Beta · Phase 1 Anti-Tout Science Desk

Trout Tout · Pundit accuracy, science-graded. Fade the trout.

Per-source pick-accuracy leaderboard. Estimates are shrunk for sample size (empirical Bayes) and shown with 95% confidence intervals. Negative lift means the source's calls have underperformed the sport-position baseline: that's a trout. Fade it.

Updated Sun, 17 May 2026 00:10:00 GMT · refreshes every 6h

Sentiment-as-pick lift

How much better (or worse) than baseline did players score after this source called them out? Sign-weighted: bullish takes "win" when the player exceeds baseline; bearish takes "win" when they underperform. Higher = better.
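The sign-weighted scoring above can be sketched in a few lines. This is a minimal illustration, not the site's actual pipeline: the `Mention` fields and function name are assumptions; only the rule (bullish calls win when the player beats baseline, bearish calls win when they underperform) comes from the text.

```python
# Minimal sketch of sign-weighted lift. Field and function names are
# hypothetical; the scoring rule follows the description above.
from dataclasses import dataclass

@dataclass
class Mention:
    sentiment: int            # +1 bullish, -1 bearish
    score_vs_baseline: float  # player's score minus the sport-position baseline

def sign_weighted_lift(mentions: list[Mention]) -> float:
    """Mean of sentiment * (score - baseline): bullish takes 'win' when the
    player exceeds baseline, bearish takes 'win' when they underperform."""
    if not mentions:
        raise ValueError("no mentions to score")
    return sum(m.sentiment * m.score_vs_baseline for m in mentions) / len(mentions)

picks = [Mention(+1, 5.0), Mention(-1, -3.0), Mention(+1, -2.0)]
print(sign_weighted_lift(picks))  # (5.0 + 3.0 - 2.0) / 3 = 2.0
```

A correct bearish call (player 3.0 under baseline) contributes positively, so a source can earn lift from negativity as long as the negativity is right.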

#1 Thinking Basketball · YT · Lift +8.01 · 95% CI [+9.09, +14.47] · n = 182
#2 Portland Trail Blazers (Official) · YT, fan · Lift +3.08 · 95% CI [+2.03, +5.08] · n = 107
#3 JxmyHighroller · YT · Lift +2.14 · 95% CI [+0.37, +8.25] · n = 138
#4 The Bill Simmons Podcast · Pod · Lift -0.73 · 95% CI [-2.57, +0.82] · n = 226
#5 Pro Football Focus · news_rss, analyst · Lift -10.76 · 95% CI [-23.90, -16.55] · n = 2 · small sample

Explicit-pick hit rate · Live

When a source makes an explicit over/under call, how often does it settle in their favor? Each pick is matched to the closest pickem line and graded against the actual box score. NBA is fully wired; NFL needs the nfl_schedules week-bridge (next).
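The match-and-grade step can be sketched as below. This is a hypothetical illustration under stated assumptions: "closest" is taken to mean closest by scrape timestamp, and the line/field names are invented, not the site's schema.

```python
# Hypothetical sketch of grading one explicit over/under call: match the
# pick to the closest scraped pickem line by timestamp, then settle it
# against the actual box-score stat. All names here are assumptions.
from datetime import datetime

def grade_pick(call: str, pick_time: datetime,
               lines: list[tuple[datetime, float]], actual: float) -> bool:
    """True if the pick settled in the source's favor."""
    # Closest line = smallest scrape-time distance to the pick.
    _, line_value = min(lines, key=lambda lv: abs(lv[0] - pick_time))
    if call == "over":
        return actual > line_value
    return actual < line_value

lines = [(datetime(2026, 5, 10), 24.5), (datetime(2026, 5, 14), 26.5)]
print(grade_pick("over", datetime(2026, 5, 13), lines, 28))  # True: 28 > 26.5
```

Note the pick on May 13 grades against the May 14 line (one day away), not the May 10 line (three days away).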

Building sample — no settled explicit picks yet for this filter. Mentions need a non-null prop_implication AND a pickem line scraped within ±14 days AND a settled box-score game in the post-line window.
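The three-part gate above can be sketched as a filter. The `prop_implication` name and the ±14-day window come from the text; the record shapes and the reading of "post-line window" (game tips off at or after the line scrape and has settled) are assumptions for illustration.

```python
# Hypothetical sketch of the settlement filter: a mention only enters the
# hit-rate sample when all three conditions from the text hold. Record
# shapes and the post-line-window check are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Mention:
    prop_implication: Optional[str]  # e.g. "over" / "under"; None = no explicit call
    published: datetime

@dataclass
class Line:
    scraped: datetime

@dataclass
class Game:
    tipoff: datetime
    settled: bool

def is_gradable(m: Mention, line: Optional[Line], game: Optional[Game]) -> bool:
    # Condition 1: a non-null prop_implication.
    if m.prop_implication is None or line is None or game is None:
        return False
    # Condition 2: a pickem line scraped within +/-14 days of the mention.
    if abs(line.scraped - m.published) > timedelta(days=14):
        return False
    # Condition 3: a settled box-score game in the post-line window.
    return game.settled and game.tipoff >= line.scraped
```

Until mentions start passing all three checks, the hit-rate panel stays in the "building sample" state.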

Methodology. Lift is computed over a 14-day window after each episode is published. Shrinkage uses an empirical-Bayes prior with a pseudo-count of 50 toward neutrality (lift = 0, or hit rate = 50%). 95% CIs are Wald (lift) and Wilson (hit rate). Small samples (n < 10) still appear, but with faded CI bars.
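The shrinkage and interval choices can be sketched as follows. Only the pseudo-count of 50, the neutral priors, and the Wald/Wilson interval types come from the methodology note; function names and the z = 1.96 default are illustrative.

```python
# Sketch of the methodology's estimators: empirical-Bayes shrinkage with a
# pseudo-count of 50 toward neutrality, Wald CI for lift, Wilson for hit rate.
import math

PSEUDO = 50  # pseudo-count from the methodology note

def shrink_lift(raw_lift: float, n: int) -> float:
    """Shrink a raw mean lift toward the neutral prior of 0."""
    return raw_lift * n / (n + PSEUDO)

def shrink_hit_rate(hits: int, n: int) -> float:
    """Shrink a raw hit rate toward the neutral prior of 50%."""
    return (hits + PSEUDO / 2) / (n + PSEUDO)

def wald_ci(mean: float, sd: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wald interval for the lift: mean +/- z * sd / sqrt(n)."""
    half = z * sd / math.sqrt(n)
    return (mean - half, mean + half)

def wilson_ci(hits: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a hit rate (better behaved at small n)."""
    p = hits / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return (center - half, center + half)

print(shrink_lift(10.0, 50))    # 5.0: when n equals the pseudo-count, raw lift is halved
print(shrink_hit_rate(30, 50))  # 0.55: 30/50 = 0.60 raw, pulled toward 0.50
```

With n = 2 the pseudo-count dominates (2/52 of the raw signal survives), which is why thin-sample rows are both shrunk hard and flagged.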

Not investment / betting advice. Historical accuracy doesn't predict future performance, especially with thin samples. Use this as one signal among many.