I'm interested in "streakiness", also known as runs, within result sets that ultimately come from RNGs. I haven't yet learned how to compare the expected vs. actual amount of streakiness within a set of outcomes. It always feels to me like there's more than there should be, but of course my feelings about it are likely pretty biased.
This should be a very easy analysis if your question is "How do we know if streaks happen more/less frequently than expected from a random raffle?" That said, I'mma be real and say I won't do it, but awaythro or someone else might.
You can break it down into 2 questions:
1. Is someone already on a streak of N wins more likely to win the next raffle than any other entrant?
2. Is the distribution of streaks different from what you'd expect? (You have to be careful about how you construct "expected" here, since the probability of a streak continuing varies from raffle to raffle based on: a) whether the previous winner entered, and b) how many other people entered.)
Both of those are fairly easy to answer; you just need to be careful about constructing expectations.
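Here's a minimal sketch of one way to construct that expectation for question 2: redraw each raffle's winner uniformly at random from the people who actually entered it, and build a null distribution of the longest streak. I'm assuming a raffle log shaped like (entrants, winner) pairs; the names `raffles`, `longest_streak`, and `simulate_null` are made up for illustration, not from any real bot or library.

```python
import random
from collections import Counter

def longest_streak(winners):
    """Length of the longest run of consecutive wins by the same person."""
    best = cur = 1
    for prev, nxt in zip(winners, winners[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

def simulate_null(raffles, n_sims=10_000, seed=0):
    """Redraw each raffle's winner uniformly from its actual entrants.

    Conditioning on who actually entered each raffle is what keeps the
    "expected" side honest: streaks are more likely when the same few
    people keep entering, and this bakes that in automatically.
    """
    rng = random.Random(seed)
    dist = Counter()
    for _ in range(n_sims):
        simulated = [rng.choice(entrants) for entrants, _ in raffles]
        dist[longest_streak(simulated)] += 1
    return dist

# Toy example log: three entrants, five raffles (fake data).
raffles = [
    (["ann", "bob", "cat"], "bob"),
    (["ann", "bob"],        "bob"),
    (["ann", "bob", "cat"], "bob"),
    (["ann", "cat"],        "ann"),
    (["ann", "bob", "cat"], "cat"),
]
observed = longest_streak([w for _, w in raffles])
null = simulate_null(raffles)
p_value = sum(c for k, c in null.items() if k >= observed) / sum(null.values())
print(f"observed longest streak = {observed}, p ≈ {p_value:.3f}")
```

Question 1 falls out of the same data: for every raffle where someone enters while already on a streak, compare how often they actually win against 1/(number of entrants) for that raffle.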
If you get an interesting result, then you probably modeled it wrong, because I think we can all confidently say, yeah, the raffle is random (I mean as far as one would care about randomness for a chat raffle; I'm sure there's interesting non-randomness based on the actual RNG involved, but whatever).
Anyway, the more interesting question IMO is why it FEELS like there are streaks, and I think that's tied to constructing expectations in ways that don't account for: the same people tend to be online in streaks, the same people tend to join, sometimes it's just a few people joining the raffle, etc. I think it's human intuition that underestimates how much streakiness to expect, or overreports the streaks it sees, and that's pretty interesting.