We’ve been busy with the holidays and have lost some steam posting here, but I wanted to get the week 16 results posted for posterity and for anyone following closely.

We made a couple of random bets without planning them out as usual, and came out up very slightly.

Here are the results from The Model for week 16:

The Model had another middling week, matching the Vegas odds with a 63% win rate (it went 50/50 on its six underdog picks, which would have paid a bit if you had bet them all on the moneyline) and was exactly 50/50 against the spread.
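To see why a 50/50 underdog record can still pay, here’s a quick back-of-the-envelope sketch. The odds here are hypothetical (+150 on each dog, not the actual week 16 lines), but the arithmetic is the point: underdog moneylines pay more than even money, so breaking even on picks is profitable on dollars.

```python
# Hypothetical: 6 underdog moneyline bets at +150 American odds, 3 win.
stake = 100          # dollars risked per bet
odds = 150           # American moneyline odds (assumed, not actual lines)
wins, losses = 3, 3  # the 50/50 underdog record from week 16

# A winning +150 bet returns 1.5x the stake in profit; a loss costs the stake.
profit = wins * stake * (odds / 100) - losses * stake
print(profit)  # 150.0
```

So at those assumed odds, going 3-3 on dogs nets $150 on $600 risked.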

Here are the Over/Under results:

We are back to shorting the OU model. It won just 37.5% of its bets. For years it was consistently this bad, and we would bet against it to make money. I am not a good enough machine learning programmer to know what is wrong with my setup (we’re essentially bending the margin-of-victory model to produce OU picks rather than optimizing for OU prediction specifically). It’s something I’d love to hire a better machine learning developer to work out for us.

I’d also like to analyze the OU picks more closely this year. The results have been really odd. It feels like the Over was an above-average bet this year, but I’m not sure how different that is from usual. Oddly, this week The Model was wrong on 100% of its Under bets. Is there something exploitable here?

I will post Model picks and results for week 17 and try to get some picks up for the final week of the regular season. In the playoffs, I often run out all possible games and post them at once. Traditionally The Model underperforms in the playoffs; teams just play better than usual. A fun upgrade to The Model would be getting some kind of “clutch” data into the system (red-zone scoring, or even win/loss record at the time of the game, could help). Maybe encoding coach, player, and team names somehow. This kind of thing becomes more important in the playoffs.
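For the name-encoding idea, the simplest starting point is one-hot vectors over the vocabulary of names. A tiny sketch of that (team names and vocabulary are placeholders; a real version would cover coaches and players too, or use learned embeddings instead):

```python
# Sketch: index-based one-hot encoding of categorical names as features.
# The vocabulary here is a stand-in; the real list would be built from the data.
teams = ["Chiefs", "Bills", "Eagles"]
index = {name: i for i, name in enumerate(teams)}

def one_hot(name):
    # Return a vector with a 1 in the slot for this name, 0 elsewhere.
    vec = [0] * len(teams)
    vec[index[name]] = 1
    return vec

print(one_hot("Bills"))  # [0, 1, 0]
```

One-hot works fine for a few dozen teams and coaches; for player names the vocabulary gets large enough that a learned embedding layer would likely be the better design.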

Good luck!
