In October 2025, we set up a simple experiment. Every draw, for 180 consecutive days, our Smart Pick model would generate one ticket for each of 11 lotteries. We’d log everything — the picks, the outcomes, the near-misses, and the weird internal reasoning the model surfaced. Here’s what we found.
The headline: we marginally beat random
Across the 1,980 generated tickets (11 lotteries × 180 daily draws), our model’s expected value per ticket came out 3.2% higher than a uniform-random baseline. That’s small. It’s also statistically significant, and driven almost entirely by one factor: avoiding overplayed numbers.
The model’s edge wasn’t predicting winners. It was predicting which numbers humans over-buy — and picking around them.
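Why picking around over-bought numbers moves expected value at all comes down to pari-mutuel splitting: every combination is equally likely to be drawn, but a jackpot is shared among all winning tickets, so combos the crowd over-buys pay less when they do hit. Here’s a minimal toy simulation of that mechanism — a hypothetical pick-2-of-10 game with an invented crowd bias toward small numbers; none of these parameters come from our experiment:

```python
import random

random.seed(0)

# Toy pari-mutuel lottery: a ticket is 2 distinct numbers from 1..10.
# The crowd over-buys "lucky" small numbers (hypothetical 3x weight).
NUMBERS = list(range(1, 11))
POP_WEIGHTS = {n: (3.0 if n <= 5 else 1.0) for n in NUMBERS}

def crowd_ticket():
    # Weighted sample of 2 numbers without replacement.
    pool = list(NUMBERS)
    a = random.choices(pool, [POP_WEIGHTS[n] for n in pool])[0]
    pool.remove(a)
    b = random.choices(pool, [POP_WEIGHTS[n] for n in pool])[0]
    return frozenset((a, b))

def expected_share(my_ticket, trials=10_000, crowd=50):
    # Every combo wins with the same probability, so we can condition on
    # the draw equaling my_ticket and just ask: what fraction of the
    # jackpot do I keep after splitting with matching crowd tickets?
    total = 0.0
    for _ in range(trials):
        co_winners = sum(crowd_ticket() == my_ticket for _ in range(crowd))
        total += 1.0 / (1 + co_winners)
    return total / trials

popular = frozenset((1, 2))      # heavily over-bought combo
contrarian = frozenset((9, 10))  # combo the crowd avoids
print(expected_share(popular), expected_share(contrarian))
```

Both tickets hit with identical probability; the contrarian one simply shares with fewer co-winners, so its expected payout is substantially higher. That splitting effect, scaled down to real games, is the kind of edge behind the 3.2% figure.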
Where it failed
When we asked the model for “lucky” picks, the edge vanished. The model interpreted “lucky” as “human-like” and clustered its picks around birthdays, sevens, and symmetrical sequences — exactly the numbers our statistical engine was trying to avoid.
The “7” problem
The number 7 appeared in 42% of our lucky-mode tickets, compared to the ~16% expected under uniform random selection. We’ve since rewired the prompt layer.
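The audit behind those figures is a simple inclusion-rate check: the chance a uniform-random ticket contains any one specific number is picks/pool, so the baseline differs by game (our ~16% is an aggregate across the 11 lotteries). A sketch of the check, using a hypothetical 6-of-37 game and made-up tickets for illustration:

```python
from fractions import Fraction

def expected_inclusion(picks: int, pool: int) -> Fraction:
    # Chance a uniform-random ticket contains one specific number.
    # By symmetry, each of the `pool` numbers is equally likely to
    # occupy each of the `picks` slots, giving picks / pool.
    return Fraction(picks, pool)

def observed_inclusion(tickets, number):
    # Fraction of tickets containing `number`.
    return sum(number in t for t in tickets) / len(tickets)

# A hypothetical 6-of-37 game has baseline 6/37 ~= 0.162.
print(float(expected_inclusion(6, 37)))

# Made-up tickets; 7 appears in 2 of 3.
tickets = [{7, 11, 13, 21, 30, 35}, {2, 7, 9, 14, 22, 31}, {1, 3, 8, 18, 25, 36}]
print(observed_inclusion(tickets, 7))
```

Run per game with its own picks/pool baseline, this is enough to flag the kind of skew we saw: an observed rate of 42% against a ~16% expectation.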