AI Deep-Dive

We let the model pick for 180 days. Here’s everything it got wrong.

A transparent postmortem of our Smart Pick engine: where it beat random, where it didn't, and what happens when you ask it for "lucky."

Dr. Nadia Reyes · AI Research Lead · 11 min read · Apr 10, 2026

In October 2025, we set up a simple experiment. Every draw, for 180 consecutive days, our Smart Pick model would generate one ticket for 11 lotteries. We’d log everything — the picks, the outcomes, the near-misses, and the weird internal reasoning the model surfaced. Here’s what we found.

The headline: we marginally beat random

Across 1,980 generated tickets, our model’s expected-value per ticket came out 3.2% higher than uniform random. That’s small. It’s also statistically significant, and driven almost entirely by one factor: avoiding overplayed numbers.

The model’s edge wasn’t predicting winners. It was predicting which numbers humans over-buy — and picking around them.

3.2%
average boost in expected value per ticket vs. uniform random — entirely from pot-dilution avoidance, not from “predicting” draws.
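To see how avoiding over-bought numbers alone can move expected value, here is a minimal sketch of the pot-dilution effect. Everything in it is an assumption for illustration — a 6-of-45 format, made-up popularity weights (birthday range and sevens over-bought), a hypothetical pool of one million other tickets — not SmartLotto's actual model or data.

```python
from itertools import permutations
import math

# Illustrative 6-of-45 lottery and made-up popularity weights (assumptions).
POOL = list(range(1, 46))
N_PLAYERS = 1_000_000      # other tickets in the draw (assumption)
JACKPOT = 10_000_000.0

def weight(n):
    w = 2.0 if n <= 31 else 1.0   # birthday range over-bought
    if n % 10 == 7:
        w *= 1.5                  # "lucky 7" premium
    return w

W = {n: weight(n) for n in POOL}
TOTAL_W = sum(W.values())

def ticket_prob(ticket):
    """Probability one popularity-weighted player picks exactly this ticket
    (sequential weighted sampling without replacement, summed over orders)."""
    p = 0.0
    for order in permutations(ticket):
        q, rem = 1.0, TOTAL_W
        for n in order:
            q *= W[n] / rem
            rem -= W[n]
        p += q
    return p

def expected_jackpot_share(ticket):
    """E[jackpot / (1 + co-winners)] with co-winners ~ Poisson(lam)."""
    lam = N_PLAYERS * ticket_prob(ticket)
    return JACKPOT * (1 - math.exp(-lam)) / lam

popular   = frozenset({3, 7, 11, 17, 21, 27})    # birthdays and sevens
unpopular = frozenset({33, 38, 40, 42, 44, 45})  # high, rarely-played numbers

print(f"popular ticket share:   ${expected_jackpot_share(popular):,.0f}")
print(f"unpopular ticket share: ${expected_jackpot_share(unpopular):,.0f}")
```

Both tickets are equally likely to win; the unpopular one simply splits the jackpot with fewer co-winners when it does. That is the entire mechanism behind the 3.2% figure — no draw prediction involved.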

Where it failed

When we asked the model for “lucky” picks, the edge vanished. The model interpreted “lucky” as “human-like” and clustered its picks around birthdays, sevens, and symmetrical sequences — exactly the numbers our statistical engine was trying to avoid.

The “7” problem

The number 7 appeared in 42% of our lucky-mode tickets, compared to ~16% expected. We’ve since rewired the prompt layer.
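For context on the ~16% baseline: in a k-of-n game, any specific number lands in a ticket with probability k/n by symmetry. The sketch below uses a 7-of-44 format and a hypothetical lucky-mode sample size — the article doesn't name the game or the per-mode ticket count, so both are assumptions — to show how far 42% sits outside chance.

```python
import math

def inclusion_rate(k, n):
    # P(a specific number appears in a k-of-n ticket) = k/n by symmetry
    return k / n

# A 7-of-44 format (assumption) gives roughly the ~16% baseline quoted above.
p0 = inclusion_rate(7, 44)
print(f"expected: {p0:.1%}")

# One-sample binomial z-test of the observed 42% rate against that baseline,
# over a hypothetical 1,000 lucky-mode tickets (assumed sample size).
p_obs, n_lucky = 0.42, 1000
se = math.sqrt(p0 * (1 - p0) / n_lucky)
z = (p_obs - p0) / se
print(f"z = {z:.1f}")
```

At any plausible sample size the z-score is enormous, which is why the over-representation of 7 was unmistakable rather than a fluctuation.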

Tags: AI · Machine Learning · Smart Pick · AI Analysis
Dr. Nadia Reyes
AI Research Lead

Nadia leads AI research at SmartLotto. PhD in computational statistics from University of Melbourne.

🔞 Gamble Responsibly. 18+ only.  SmartLotto is a statistical analysis and entertainment platform. Our tools do not change your odds of winning.  |  For help with problem gambling, visit gamblinghelponline.org.au or call 1800 858 858 (free, 24/7).