When you calculate pot odds in poker, what you’re calculating is expectation. Let’s say all the cards are out in a hand of Holdem, you’re heads up with a single player, and there’s 90 dollars in the pot. You assess that you have a 20% chance of winning the hand.

If he bets 10 dollars, bringing the pot to $100, then you figure out your expectation by multiplying 100 x .20 = 20 dollars. This means that by calling, you stand to win $20 on average, so you’re certainly willing to pay $10 for that chance. If instead you determined that the bet was worth less than $10, you’d fold.

Simple. You’re willing to pay up to the amount that a bet is worth to you, on average.

In the language of math, if $$X$$ is a random variable (like the payoffs for outcomes of a bet), the expectation $$E[X]$$ is defined as

$$! E[X] = p_1X_1 + p_2X_2 + \cdots + p_nX_n,$$

where for each $$i$$, $$p_i$$ is the probability of outcome $$X_i$$, assuming that the probabilities are all nonnegative and sum to 1.
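The definition translates directly into code. Here’s a minimal sketch (the function and its name are my own, not from any library) that reproduces the poker example above:

```python
def expectation(probs, payoffs):
    """E[X] = p1*X1 + p2*X2 + ... + pn*Xn, with probabilities summing to 1."""
    assert abs(sum(probs) - 1) < 1e-9, "probabilities must sum to 1"
    return sum(p * x for p, x in zip(probs, payoffs))

# The pot-odds example: a 20% chance at the $100 pot, otherwise nothing.
print(expectation([0.2, 0.8], [100, 0]))  # prints 20.0
```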

Let’s see how the expectation calculation works on a hypothetical game. Here are the rules:

- You pay an entry fee, which the house keeps.
- The house flips a coin until the first heads comes up, at which point the game ends.
- If that first heads is on the first flip, you win 1 dollar.
- If it’s on the second flip, you win 2 dollars.
- If it’s on the third flip, you win 4 dollars.
- Your payout continues to double in this way with each flip, so that if the first heads comes on the nth flip, you win $$2^{n-1}$$ dollars.

So, how much would you be willing to pay to play this game?

Let’s use the above expectation formula to determine what this game should be worth.

First, the probabilities: The chance of first getting heads on the first flip is 1/2. The chance of first getting heads on the second flip is $$(1/2)(1/2) = 1/4$$. The third flip, 1/8. Continuing in this way, we know that the probability of the first heads coming on the nth flip is $$1/2^n$$.

Multiplying each probability by the corresponding payout, we get

$$!E[X] = (1/2)*1 + (1/4)*2 + (1/8)*4 + \cdots$$ or

$$!E[X] = 1/2 + 1/2 + 1/2 + \cdots = \infty$$

See what happens? Each term in the sum simplifies to 1/2, and there are infinitely many of them. Thus, the expected value of the game is infinite!
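If the infinite sum feels suspicious, you can simulate the game. In this quick sketch (the function name is mine), each play flips until heads and doubles the payout along the way; the sample mean over many plays never settles down, creeping upward as rare monster payouts land:

```python
import random

def play_once():
    """Flip a fair coin until heads; the payout doubles with each tails."""
    payout = 1
    while random.random() < 0.5:  # tails came up, keep flipping
        payout *= 2
    return payout

trials = 100_000
mean_payout = sum(play_once() for _ in range(trials)) / trials
print(mean_payout)  # finite for any run, but grows erratically with more trials
```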

If you’re using expected value as your criterion for deciding whether to play, there’s no amount you’d be unwilling to pay for a single chance to play the game. And you’d be pretty weird, because nobody else in their right mind would pay more than maybe 20 bucks for it.

This famous problem is known as the St. Petersburg paradox. For several hundred years, mathematicians and economists have argued about the reasons nobody would pay any significant amount for the chance to play this infinite-expectation game.

One solution uses the fact that people are risk averse—in general, people don’t like taking risks with large amounts of money. So even if a game has a huge expected value, it’s hard to justify betting the farm if you have such a big chance of losing it right away (like if heads comes up on the first flip and you win a measly dollar).

But that solution, which can be made precise by dealing with expected utility gain rather than expected wealth gain, isn’t satisfactory. Why? Because you can always tweak the payouts of the game to create an even more favorable game. No matter the utility function being used, you can always invent a positive-expected-utility game that’s just as unappealing as this one.

**A better explanation is that no casino in the world has an infinite bank.** Nobody could ever pay out the huge sums that occur if heads takes, say, 40 flips to show up.

Wikipedia has an interesting chart that explains this. If, say, the casino couldn’t pay you more than $100, then your expectation is quite low ($4.28), because most of the expectation from the game comes from the huge payouts that are possible from an unlimited bank. Even if the casino could pay out a million dollars, your expectation is barely more than $10!

| Banker | Bankroll | Expected value of lottery |
|---|---|---|
| Friendly game | $100 | $4.28 |
| Millionaire | $1,000,000 | $10.95 |
| Billionaire | $1,000,000,000 | $15.93 |
| Bill Gates (2008) | $58,000,000,000 | $18.84 |
| U.S. GDP (2007) | $13.8 trillion | $22.79 |
| World GDP (2007) | $54.3 trillion | $23.77 |
| Googolaire | $10^{100} | $166.50 |
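You can reproduce these numbers with a few lines of code. Every payout below the cap still contributes 1/2 to the sum, and any longer streak pays out the whole bankroll. A quick sketch (the function name is mine):

```python
def capped_ev(bankroll):
    """St. Petersburg expectation when the banker can pay at most `bankroll`."""
    ev, n = 0.0, 1
    while 2 ** (n - 1) < bankroll:
        ev += 0.5          # each uncapped term is (1/2**n) * 2**(n-1) = 1/2
        n += 1
    ev += bankroll / 2 ** (n - 1)  # all longer streaks pay the capped bankroll
    return ev

print(round(capped_ev(100), 2))            # 4.28, the friendly game
print(round(capped_ev(1_000_000_000), 2))  # 15.93, the billionaire
```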

So what does all this mean for your own gambling? Not too much, that I can think of. But it’s interesting, nonetheless.

One thing we can learn from the first, unsatisfactory solution: If we are risk averse when it comes to large wagers, then we need to account for the riskiness of a wager in addition to simply the expectation.

So, for example, even though taking odds on a pass line bet is “the best bet in the casino,” that doesn’t mean you should put your entire bankroll on it. Sure, the odds bet is a fair, zero-expectation bet, the only one of its kind in the casino. And making a minimal pass line bet and putting everything you have behind it is as close to a fair bet as you can get.

But you don’t do it, because it’s too risky. A single seven-out, and your Vegas trip is over.

Expectation matters. But so does risk, volatility, exposure, or whatever you want to call that mysterious thing that makes gambling so much fun.

Well, it’s Saturday, and I’m making good on that promise. Here they are, in order of strongest overall to weakest overall:

| Team | Offense | Defense | Total (= Off-Def) |
|---|---|---|---|
| Pittsburgh | 9.3454 | -12.8488 | 22.1942 |
| Atlanta | 7.998 | -9.2423 | 17.2403 |
| Green Bay | 9.3868 | -6.5442 | 15.931 |
| Tennessee | 3.4918 | -8.0904 | 11.5822 |
| Chicago | 3.6899 | -6.6026 | 10.2925 |
| NY Jets | -1.0105 | -9.9763 | 8.9658 |
| Philadelphia | 7.0592 | -1.5056 | 8.5648 |
| Miami | 5.4238 | 1.781 | 3.6428 |
| Baltimore | -1.2518 | -4.5747 | 3.3229 |
| Kansas City | -3.0222 | -6.1722 | 3.15 |
| Minnesota | -9.2154 | -12.2434 | 3.028 |
| Indianapolis | 1.3655 | -1.3805 | 2.746 |
| New Orleans | 4.4175 | 1.8985 | 2.519 |
| Dallas | -3.4686 | -5.4919 | 2.0233 |
| New England | 13.5998 | 11.6069 | 1.9929 |
| Detroit | 5.7315 | 4.7494 | 0.9821 |
| San Diego | 4.8438 | 5.4278 | -0.584 |
| Houston | 5.8486 | 7.742 | -1.8934 |
| Cleveland | -1.8498 | 0.4057 | -2.2555 |
| Cincinnati | -4.6325 | -1.8317 | -2.8008 |
| Seattle | -3.6599 | 0.0247 | -3.6846 |
| Tampa Bay | -2.1605 | 2.139 | -4.2995 |
| Buffalo | -6.334 | -1.2093 | -5.1247 |
| St. Louis | -6.8654 | -1.6255 | -5.2399 |
| Washington | -2.2807 | 4.1093 | -6.39 |
| Arizona | -2.2533 | 4.7989 | -7.0522 |
| Oakland | -0.7469 | 6.4701 | -7.217 |
| Denver | -1.4403 | 6.3187 | -7.759 |
| NY Giants | -1.4845 | 11.6852 | -13.1697 |
| San Francisco | -5.6359 | 9.0157 | -14.6516 |
| Jacksonville | -10.8193 | 4.4598 | -15.2791 |
| Carolina | -14.07 | 6.7069 | -20.7769 |

A few things to note about these ratings. First, the numbers are points scored (or allowed) above average, per game. So for offenses, positive numbers are good; for defenses, negative numbers are good. And when you subtract the defensive points from the offensive points, you get total point differential, or what I’m calling the strength rating.

If you’re observant, you’ll notice something else. The total numbers don’t match the numbers I computed in my last post, when the model didn’t break things down into offense and defense. So what gives?

Well, I discovered a minor coding error in what I did last time. I fixed it, so now when we compute single strength ratings and compare them to the offense/defense totals, we get:

| Team | Single Output | O-D Total (from above) |
|---|---|---|
| Pittsburgh | 22.0813 | 22.1942 |
| Atlanta | 16.9875 | 17.2403 |
| Green Bay | 15.4567 | 15.931 |
| Tennessee | 11.8251 | 11.5822 |
| Chicago | 10.2158 | 10.2925 |
| NY Jets | 8.8681 | 8.9658 |
| Philadelphia | 8.1996 | 8.5648 |
| Miami | 3.2691 | 3.6428 |
| Baltimore | 2.9968 | 3.3229 |
| Kansas City | 3.2809 | 3.15 |
| Minnesota | 2.9843 | 3.028 |
| Indianapolis | 3.0042 | 2.746 |
| New Orleans | 2.5907 | 2.519 |
| Dallas | 2.0974 | 2.0233 |
| New England | 1.8985 | 1.9929 |
| Detroit | 0.6533 | 0.9821 |
| San Diego | -0.5271 | -0.584 |
| Houston | -1.4855 | -1.8934 |
| Cleveland | -2.4303 | -2.2555 |
| Cincinnati | -3.0073 | -2.8008 |
| Seattle | -3.3511 | -3.6846 |
| Tampa Bay | -4.1288 | -4.2995 |
| Buffalo | -5.6052 | -5.1247 |
| St. Louis | -5.0459 | -5.2399 |
| Washington | -5.998 | -6.39 |
| Arizona | -7.2695 | -7.0522 |
| Oakland | -7.3101 | -7.217 |
| Denver | -7.3261 | -7.759 |
| NY Giants | -12.736 | -13.1697 |
| San Francisco | -14.6399 | -14.6516 |
| Jacksonville | -15.0712 | -15.2791 |
| Carolina | -20.4773 | -20.7769 |

The numbers very nearly match, which they should. The difference can be attributed to randomness in the optimization routine used to solve the system of equations, and its effect on bets will be negligible. (In other words, we’d never bet a game where the line was close enough to ours that this small difference affected the side we took.)

So I’ll show you how we’d use these strength ratings to pick a few games. First, notice that when we combine offense and defense ratings into a single number, we’re throwing away some good information, since we could have gotten that single number without breaking it down. For now, that’s ok, but in the future, we’ll want to use that extra information to possibly say something about matchups.

Maybe, for example, when a bad offense plays against a bad defense, they often score a lot of points, but when a great offense plays a great defense, they don’t score much, in general. This type of information would certainly be useful for choosing bets, but we can’t use it without significant backtesting. So for now, we just ignore the offense/defense breakdown. But even just using the single strength rating, there’s still one more thing we need to do before we can really interpret these as meaningful values.

Notice how large some of the values are. For example, Pittsburgh’s rating is a whopping 22 points per game, while their opponent Baltimore’s is a more reasonable 3.

Interpreting the ratings as points per game, as is natural, this tells us the Steelers should beat the Ravens by 19 points, before we even consider their home field advantage. 19 POINTS?? They’re only a 2-point favorite, at home!

So what’s the problem? Well, remember how last time I mentioned that we haven’t yet accounted for reversion to the mean? In general, teams that look really good right now aren’t going to be this good all year, and teams that look really bad aren’t going to be quite this bad all year. As the data piles up, teams will all tend to look more average than they do now, as a rule.

Let’s see what we can do to quantify this and make an adjustment. Check out this histogram of NFL teams’ strength ratings going into Week 4, as computed by our model, from 2002-2007:

You can see that most of the teams’ ratings fall between -10 and 10, but with some in the -30 to -40 range on the low end, and the 20 to 30 range on the high end. Compare this to the same graph, but of teams’ ratings heading into Week 16:

Ahh, a much tighter distribution! There’s just one amazing outlier, a team that managed to be 20 points better than most of the league for the entire season. Three guesses as to which model-dating, prettyboy quarterback engineered that one. (Of course, we all know their coach was cheating.)

Jealousy aside, we need to account for this somehow. We do it by looking at variance, a measure of how widely spread around the mean a set of datapoints is. What we find is this: *The variance of the Week 4 ratings is very large, about 148.3, while the variance of the Week 16 ratings is only about 41.9.*

This means that in order to compare teams in Week 4, we need to shrink the variance of our Week 4 ratings by a factor of $$148.3/41.9\approx 3.5$$.

In this case, since the mean is zero (remember, we required that of the ratings when we solved the system), the variance $$\sigma^2$$ can be expressed as $$\sigma^2 = E(X^2).$$ Since we want to shrink the variance by a factor of 3.5, this means we need to divide the X’s (our Week 4 strength ratings) by a factor of $$\sqrt{3.5}$$ in order to make the spread of our team abilities match what we know it should look like after lots of games have revealed teams’ true abilities. These “variance-adjusted strengths” appear below.

| Team | Reduced-Variance Strength |
|---|---|
| Pittsburgh | 11.7381 |
| Atlanta | 9.0303 |
| Green Bay | 8.2165 |
| Tennessee | 6.2860 |
| Chicago | 5.4306 |
| NY Jets | 4.7141 |
| Philadelphia | 4.3588 |
| Miami | 1.7378 |
| Baltimore | 1.5930 |
| Kansas City | 1.7441 |
| Minnesota | 1.5864 |
| Indianapolis | 1.5970 |
| New Orleans | 1.3772 |
| Dallas | 1.1149 |
| New England | 1.0092 |
| Detroit | 0.3473 |
| San Diego | -0.2802 |
| Houston | -0.7897 |
| Cleveland | -1.2919 |
| Cincinnati | -1.5986 |
| Seattle | -1.7814 |
| Tampa Bay | -2.1948 |
| Buffalo | -2.9796 |
| St. Louis | -2.6823 |
| Washington | -3.1884 |
| Arizona | -3.8643 |
| Oakland | -3.8859 |
| Denver | -3.8944 |
| NY Giants | -6.7702 |
| San Francisco | -7.7823 |
| Jacksonville | -8.0116 |
| Carolina | -10.8854 |
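As a sanity check of my own on the arithmetic: dividing the single-output ratings from the earlier table by the unrounded factor $$\sqrt{148.3/41.9}$$ lands within a rounding hair of the values above (three teams shown here):

```python
import math

# Dividing each rating by this shrinks the variance by 148.3/41.9 ≈ 3.5.
shrink = math.sqrt(148.3 / 41.9)  # ≈ 1.88

ratings = {"Pittsburgh": 22.0813, "Atlanta": 16.9875, "Carolina": -20.4773}
adjusted = {team: round(r / shrink, 4) for team, r in ratings.items()}
print(adjusted)  # matches the table to within rounding
```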

It’s these numbers that we should use to make our picks. I’ll emphasize again that *this is a simplified version of the model*, so I wouldn’t advise making any bets with it yet (unless strictly for fun). We’re not even accounting for any injuries yet!

But for the fun of it, here’s what we’d do if we were betting.

Look at the Baltimore-Pittsburgh game. The Steelers are laying 2 points. We have the Ravens at 1.59; the Steelers are the best team in the league at 11.73, so the difference is $$11.73-1.59=10.14.$$ Add to that 2.5 for the Steelers’ home field advantage, and they should be a 12.5-point favorite. (Yeah, I know that’s huge. That happens early in the year, because the model uses no preconceived notions about team abilities, only what’s in the data.)

So in this case, much as I hate to say it, the model would take the Steelers. I’ll do one more, then you’re on your own:

- Denver +7 at Tennessee. Denver = -3.89, Tennessee = 6.28. So the difference is greater than 10, in favor of Tennessee, plus home field for them. Way more than 7, so we take Tennessee, laying the points.

Get the idea?

Continuing in this way, the model would pick:

- **Pittsburgh minus 2**
- **Denver plus 7** [This is a typo. According to the above, the pick should have been Tennessee minus 7 (which is a loser).]
- Cleveland plus 3
- Atlanta minus 7
- Philadelphia minus 6
- **Chicago plus 3.5**
- Miami plus 1

The bold are the most favorable bets, according to the model. The other games listed are still good, and for games not listed, the model’s prediction was too close to the actual line to make a bet. (For the record, JMS, it likes your Lions getting 14, but just barely.)

In the future, I’d like to find a better way—possibly using pattern-recognition or other machine-learning software—to recognize favorable bets, given the offense/defense ratings, week number, and spread. That’s something I didn’t get anywhere with when I tried it before, but I’m hoping a few bright minds out there will have some suggestions.

So there you go. Week 4 NFL picks, from our very (right now) basic model. I’m planning to introduce a little more each week, along with an additional post every week about another topic, just to lighten things up. Sound good?

If you’re on board for that, subscribe to Thinking Bettor to make sure you never miss a post. Come on, it’s free!

Since the model came about as a way of computing strength of schedule, it’s particularly good at comparing two teams who have few or no common opponents, where humans have difficulty. For this reason, it has a lot of potential for betting on college football, tennis, and international soccer, where these types of matches arise frequently and make it hard for handicappers to accurately compare the teams playing.

*A note on the math: I’m including a few equations here, but nothing fancy. I’ll try to explain in words any equations I write, so if it feels too much like school, you can probably ignore the math and still get the gist. I plan on setting up static pages with more detailed mathematics so that you can refer to them if you decide you want to go deeper with the nerdy stuff.*

For the sake of familiarity, let’s assume we’re talking about the NFL. You’re considering a bet on the Houston Texans this weekend, Week 4.

We’d like to quantify just how good the Texans are and to do the same for their opponent, the Raiders, to decide on the likely outcome of the game.

So we look at who the Texans have played and how they did. We’ve seen the Texans beat the Colts and Redskins this year, by 10 points and 3 points, but they lost to the Cowboys, by 14. Does this all make them good or bad? We need to know more.

So what do we do next? Of course, we have to look at how good their previous opponents, the Colts, Redskins, and Cowboys are.

But almost immediately, we run into the problem.

**The Problem:** If we try to evaluate these three teams the same way, by looking at who they’ve played and their margins of victory or defeat, we’re stuck. They’ve each played the Texans, the team we’re trying to evaluate in the first place! So we have no obvious way of determining how good they are, which leaves us with nothing to say about how good the Texans are.

We could look at the other games these teams have played, but the same problem arises in trying to evaluate *their* opponents. As we get further into the season the web of games played becomes more complicated, and the problem becomes overwhelming.

It turns out that with a little creativity and a computer, this problem can be solved. We’ve seen that looking at each team one at a time won’t work, so we have to set the problem up as a system of equations and write a program to solve it. Let’s look at the most basic way to do this.

Recall that for each game a team has played, we’re using only two pieces of information: the strength of the opponent and the margin of victory or defeat. (We’ll need to eventually incorporate more information, but this simplicity is what allows the model to be used for almost any sport.)

So how should we use that information for each game? In the simplest setup, we can simply add the two numbers together. Let $$x_i$$ be the strength of team $$i$$, and $$m_{i,j}$$ be the margin of team $$i$$’s game against team $$j$$. (Right now, we’re assuming nobody has played anyone twice yet.) So for Houston’s win over Indy, the contribution to Houston’s strength rating looks like

$$m_{hou,ind} + x_{ind} = 10 + x_{ind}$$.

Notice that the 10 is positive, since Houston won the game, and notice also that the larger the opponent’s strength rating and margin of victory, the larger the sum will be.

Now let’s say that a team’s strength rating will be the average of these terms for each game it has played. Then in Houston’s case,

$$x_{hou} = \frac{1}{3}\big((m_{hou,ind} + x_{ind}) + (m_{hou,was} + x_{was}) + (m_{hou,dal} + x_{dal})\big)$$ or

$$x_{hou} = \frac{1}{3}\big((10+x_{ind})+(3+x_{was})+(-14+x_{dal})\big)$$.

Set up an equation like this for each team, and we’re left with a system of 32 equations in 32 unknowns. This system won’t always have an exact solution, but we can use an optimization routine to find the set of team strength ratings that best fits the data. (That is, the total error in the system is minimized.)

We need to make one final assumption in order to get values that have any meaning to us. Notice that the units in the margin of victory numbers $$m_{i,j}$$ are simply “points.” To be consistent, we want the units for the strength ratings to be “points” as well. So if we require that the $$x_i$$’s sum to 0, we can interpret them as the amount of points per game by which each team is better or worse than average, after accounting for strength of schedule. Make sense?
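Here’s a minimal sketch of that setup, on an invented three-team league rather than the real 32-team schedule (the team labels, margins, and variable names are all made up for illustration; a real run would build the same matrix for every team):

```python
import numpy as np

# Invented example: A beat B by 7, B beat C by 3, C beat A by 2.
# games[i] lists (opponent_index, margin) from team i's point of view.
games = [
    [(1, 7), (2, -2)],   # team A
    [(0, -7), (2, 3)],   # team B
    [(0, 2), (1, -3)],   # team C
]

n = len(games)
A = np.zeros((n + 1, n))
b = np.zeros(n + 1)
for i, played in enumerate(games):
    # Each row encodes: x_i - (1/k) * sum of opponents' x_j = (1/k) * sum of margins
    A[i, i] = 1.0
    for j, margin in played:
        A[i, j] -= 1.0 / len(played)
        b[i] += margin / len(played)
A[n, :] = 1.0  # final row: require the ratings to sum to zero

# Least squares finds the best-fitting ratings when no exact solution exists.
strengths, *_ = np.linalg.lstsq(A, b, rcond=None)
print(strengths.round(3))  # ≈ [ 1.667 -1.333 -0.333], in points above average
```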

Finally, we’re in a position to calculate the team strengths. But first, please note: You should NOT bet based on these numbers! The actual model I use for betting is far more complicated than this (you didn’t think beating the sportsbooks was *that* easy, did you?). I’ve calculated these numbers using only the simple model I’ve explained above, in addition to a small homefield adjustment to each margin of victory.

| Team | Strength (in points) |
|---|---|
| Arizona | -5.8292 |
| Atlanta | 10.7724 |
| Baltimore | 2.1843 |
| Buffalo | -6.6633 |
| Carolina | -13.4239 |
| Chicago | 5.2064 |
| Cincinnati | -0.9725 |
| Cleveland | -1.4321 |
| Dallas | 1.143 |
| Denver | -2.9387 |
| Detroit | -2.1692 |
| Green Bay | 9.2993 |
| Houston | -0.3437 |
| Indianapolis | 4.7844 |
| Jacksonville | -10.3225 |
| Kansas City | 4.4525 |
| Miami | 0.7156 |
| Minnesota | 0.4497 |
| New England | 0.8388 |
| New Orleans | 0.8683 |
| NY Giants | -8.1594 |
| NY Jets | 5.5405 |
| Oakland | -5.2587 |
| Philadelphia | 5.0974 |
| Pittsburgh | 14.5176 |
| San Diego | 1.9832 |
| San Francisco | -10.1833 |
| Seattle | 0.3264 |
| St. Louis | -2.3638 |
| Tampa Bay | -2.9795 |
| Tennessee | 8.6363 |
| Washington | -3.7762 |

At a glance, we see that Pittsburgh is the best (even without Roethlisberger, a scary thought), and that Carolina is the worst. Sounds about right.

Now, if you were to bet based on these numbers, here’s how you’d do it. (Like I said, this wouldn’t help you much. But because the market is relatively efficient, you wouldn’t be any worse off this way than you’d be by betting randomly.)

Let’s say you actually wanted to bet that Houston-Oakland game. Look at the chart and see that Houston is .34 points worse than average. Oakland is about 5.25 points worse than average. Subtracting, we find that Houston is just less than 5 points better than Oakland. But since Oakland is home, subtract 2.5 points from the difference, and our model predicts that Houston should win by 2-3 points. Since the spread has Houston laying 4 points, we’d take Oakland with the points. Simple, right? (Actually, with the line that close to our projected outcome, this probably wouldn’t be a bet at all.)
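That recipe is mechanical enough to write down as a one-line helper (my own illustration, not the model’s actual code):

```python
def predicted_margin(home_rating, away_rating, home_field=2.5):
    """Points by which the home team should win (negative: road team favored)."""
    return home_rating - away_rating + home_field

# Houston (-0.3437) at Oakland (-5.2587), using the table above:
margin = predicted_margin(-5.2587, -0.3437)
print(margin)  # ≈ -2.4: the road team, Houston, favored by about 2.4 points
```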

But again, this is an overly-simplified model. We’re failing to account for a lot of stuff here, including mean reversion, any injuries that make the past data a poor representation of the teams that will be on the field on Sunday, and more.

Here are a few ideas I have for improving the model, some of which I’ve already implemented and others which I haven’t. I’m listing them here in hopes that they’ll get the wheels turning in your head to come up with more ideas about how we might make this better.

- Using a “forgetting factor” to weight recent games more strongly than games from several weeks ago
- Using thresholding or tapering to minimize the impact of blowouts
- Our additive model for a team’s strength can be interpreted as “average margin of victory plus average opponent strength.” A multiplicative model would better capture interaction between teams
- Rather than using margin of victory, use yards, turnovers, and other data that is less noisy (luck impacts the final score more than the final yard and turnover totals). Convert this back to points after it is output from the model, using some form of regression, to make bets
- Find a way to efficiently account for injuries by determining how many points an injured or returning player is worth
- Use a pattern-recognition algorithm to determine what a favorable bet “looks like” in terms of the spread and team strength ratings
- Filter data through a neural network or other filter to determine best way to combine team strengths and current week number to determine expected outcome. Use a model which produces offensive and defensive ratings to capture interactions and use current week number to account for mean reversion
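To give a flavor of the first idea, here’s one simple way a forgetting factor could work (an illustration only, with made-up function names, not the model’s implementation):

```python
def discounted_average(margins, forget=0.9):
    """Average margin with recent games weighted more heavily.

    `margins` runs oldest to newest; a game k games back gets weight forget**k.
    """
    weights = [forget ** (len(margins) - 1 - i) for i in range(len(margins))]
    total = sum(w * m for w, m in zip(weights, margins))
    return total / sum(weights)

# Houston's margins so far (oldest first): +10, +3, -14.
print(round(discounted_average([10, 3, -14]), 2))       # -1.18, recent loss weighs most
print(round(discounted_average([10, 3, -14], 1.0), 2))  # -0.33, forget=1 is the plain mean
```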

The final point above mentions an offensive/defensive model, one which uses points scored and points allowed rather than simply margin of victory. In fact, it’s very easy to expand the basic model in this way, with the nice result that for each team, the offensive and defensive ratings sum to the total ratings from above.

There’s not much we can do with this new information, since the model is still additive (once we add them, we get the same implied bets as in the simpler model). Still, this is the model we want to build upon, so I’ll do my best to run this version of the system and publish the results by this weekend.

Alright, that’s plenty for today. This math-on-a-blog stuff is brand new to me, so I’m trying to figure out the right balance between what’s interesting and necessary and what will put you to sleep. So let me know what you think. We could certainly go much deeper into the math with some matrix theory, even for the simple model in this post, but I’m doing my best right now to keep my inner nerd at bay.

I’m going to try to start posting more frequently here, maybe two or three times a week. So check back soon and subscribe to get post updates automatically.

For simplicity, compare the sports betting market to a stock market. You invest some money in a position, and at some future time you get a payoff (which may be zero). Prices of the positions (or odds on bets) change as new information becomes available. When demand is high and lots of people want a certain stock or side of a bet, prices rise. (In sports betting, the price is reflected by the odds, which pay less as a bet becomes more likely to win.) When everyone’s selling, or betting the other side, the price falls.

There’s an essential lesson to be gleaned from this comparison, and shockingly, it’s one that many casual sports bettors never get. It’s the notion of market efficiency, a concept that (unfortunately) makes it a lot harder to win consistently. But it’s important to understand, because it makes it clear exactly what is necessary in order to win.

The main point of the Efficient Market Hypothesis is this: *All public information, as soon as it becomes available, is immediately reflected in the price of an asset*. Information travels so quickly, and trading systems are so sophisticated, that as soon as fresh news comes out regarding a company, buyers bid up the price of a stock or sellers bid it down. Key here is that it happens *instantaneously.*

The conclusion: No public information, whether from a chart of historical prices or from an up-to-the-minute newswire, is useful in helping to predict the future price of a stock. By the time you can react to the public news that a startup tech company just got a huge government contract, the market will have already driven up the price to where it should be. (Note: Private information is a different story, and that’s why Martha Stewart got in trouble.)

Not everyone agrees that world financial markets are completely efficient. But there’s a lot of evidence that, at least to some extent, they are.

However efficient financial markets may be, you can bet that sports betting markets are less so—good news, if you’re trying to beat them.

Why? Because they’re less liquid. While there are millions of participants in financial markets, a small sportsbook may be catering to a few dozen players.

Let’s say a bookie has gotten a few bets on the Saints for Monday Night Football tonight, but nothing on the 49ers. The goal of a smart bookmaker is to balance the book: He wants equal money on both sides of the game, so that he wins regardless of the outcome, by paying the winners slightly less than even money. For him, it’s about limiting or eliminating risk, not gambling.

So he hopes to get a few bets on the underdog 49ers today to balance the book, and as incentive for players to bet on the ‘9ers, offers to lay 7 points instead of the 6 that everyone else is laying. But what happens when news comes out that Saints QB Drew Brees is only 80% for tonight’s game, due to an ankle injury that flared up at the end of the week? (Note: This didn’t happen, so don’t use it to bet!)

Now the bookie is in a tough position. Given this news, he should probably move the spread back to Saints-minus-6 or even minus-5 or 4, but he also has to think about his unbalanced book. It’s very likely that he won’t move his line until some money comes in on the 49ers, or at least that he won’t move it enough. And if this happens, you’ll have a chance to get a good line on the 49ers before he does.

In this case, the lack of participants in the market has made it illiquid, and therefore inefficient. The bookmaker can’t fully incorporate the information about the injury into his line because of concerns about his own risk, so there’s opportunity to get a favorable bet down if you can quickly adjust whatever predictive model you’re using to account for the new information.

Of course, most larger sportsbooks won’t have this problem. They won’t be overly concerned with balancing their books if they have the bankroll to handle the risk associated with leaving a book unbalanced in order to incorporate all available information into their line.

In other words, it won’t always be this easy.

So let’s assume that sports betting markets aren’t completely efficient but are fairly close—in other words, it’s possible to win, but you have to be really good—which seems like an accurate assumption to me. In this case, “obvious” information is not going to help you, as it’s almost certainly already incorporated in the odds.

*Example: When two high-powered offenses are playing, someone on sports-talk radio inevitably says, “You’d have to be an idiot to take the under in that game!”* Well, it turns out he’s actually the idiot. True, the teams are likely to score a lot of points. But, of course, the over/under line is set to reflect this information.

Same goes for any other information that’s widely known. It simply will not help you.

In an efficient market, betting with information everyone knows is the same as betting with no information at all. You’ll win half the time, like everyone else. And in the long run, the vig will wipe you out.

**If you’re going to make a long-term profit from sports betting, it has to come from one of two sources.**

Either:

- You have information nobody else has; or
- You can process information better than anyone else

As someone with a math and statistics background (and without any inside sources), I’m interested in the second option.

Processing information can be as simple as using your “feel” for the game, the teams, the matchups, and the like. But I don’t think you can win this way, unless you are truly something special and live and breathe the sport you’re betting on.

Instead, I prefer to build mathematical models to process the ample numbers that are now available for any sporting event. While everyone has access to these statistics, my feeling is that the mounds of data contain patterns of information that human beings are unable to recognize due to limited processing power in our brains.

With this in mind, in my next post I’m going to introduce the model I built for predicting outcomes of football games. It’s one that I bet with for an entire NFL season, and it picked winners well above the roughly 52.4% rate needed to beat the house edge at standard -110 odds.

So why am I not rich? Well, I didn’t make money that season. My system for sizing bets was flawed (or maybe just the victim of bad luck), so I lost enough large bets that the system didn’t come out on top. More importantly, manually accounting for injuries and other hard-to-automate information required far more hours than I had to give, so I eventually abandoned the system.

And that’s why I’m going to share the model here. Not for the purpose of selling picks, but because it’s interesting and I believe that making it public is the best way to make it better. (What if I could convince 32 passionate people to each handle the injuries for a single team that they follow anyway?)

Then again, people like picks. You didn’t think I could resist posting those, did you?

Sound like fun? Subscribe to Thinking Bettor so you’ll get that post and other new ones delivered automatically.

> …there is an authentically verified story that sometime in the 1950s a [roulette] wheel in Monte Carlo came up even twenty-eight times in straight succession. The odds of that happening are close to 268,435,456 to 1. Based on the number of coups per day at Monte Carlo, such an event is likely to happen only once in five hundred years.

Mazur uses this story to back up an argument which holds that, at least until very recently, many roulette wheels were not at all fair.

Assuming the math is right (we’ll check it later), can you find the flaw in his argument? The following example will help.

Imagine you hand a pair of dice to someone who has never rolled dice in her life. She rolls them, and gets double fives in her first roll. Someone says, “Hey, beginner’s luck! What are the odds of that on her first roll?”

Well, what are they?

There are two answers I’d take here, one much better than the other.

The first one goes like this. The odds of rolling a five with one die are 1 in 6; the dice are independent so the odds of rolling another five are 1 in 6; therefore the odds of rolling double fives are

$$(1/6)*(1/6) = 1/36$$.

1 in 36.

By this logic, our new player just did something pretty unlikely on her first roll.

But wait a minute. Wouldn’t ANY pair of doubles have been just as “impressive” on the first roll? What we really should be calculating are the odds of rolling doubles, not necessarily fives. What’s the probability of that?

Since there are six possible pairs of doubles, not just one, we can just multiply by six to get 1/6. Another easy way to compute it: The first die can be anything at all. What’s the probability the second die matches it? Simple: 1 in 6. (The fact that the dice are rolled simultaneously is of no consequence for the calculation.)
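With only 36 equally likely outcomes in the sample space, a few lines of Python can brute-force both answers:

```python
from itertools import product

# Enumerate all 36 equally likely outcomes of a pair of dice.
rolls = list(product(range(1, 7), repeat=2))

p_double_fives = sum(1 for a, b in rolls if a == b == 5) / len(rolls)
p_any_doubles = sum(1 for a, b in rolls if a == b) / len(rolls)

print(p_double_fives)  # 1/36: the specific pair of fives
print(p_any_doubles)   # 1/6: any pair of doubles at all
```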

Not quite so remarkable, is it?

For some reason, a lot of people have trouble grasping that concept. The chance of rolling doubles with a single toss of a pair of dice is 1 in 6. People want to believe it’s 1 in 36, but that’s only if you specify *which* pair of doubles must be thrown.

This same mistake is what causes Joseph Mazur to incorrectly conclude that because a roulette wheel came up even 28 straight times in 1950, it was very likely an unfair wheel. Let’s see where he went wrong.

There are 37 slots on a European roulette wheel. 18 are even, 18 are odd, and one is the 0, which I’m assuming does not count as either even or odd here.

So, with a fair wheel, the chances of an even number coming up are 18/37. If spins are independent, we can multiply probabilities of single spins to get joint probabilities, so the probability of two straight evens is then (18/37)*(18/37). Continuing in this manner, we compute the chances of getting 28 consecutive even numbers to be $$(18/37)^{28}$$.

Turns out, this corresponds to odds roughly twice as long (meaning an event twice as rare) as Mazur’s calculation would indicate. Why the difference?
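A quick Python check of both figures, taking the 268,435,456 straight from the quote:

```python
# Mazur's "268,435,456 to 1" figure versus the single-run probability.
mazur_odds = 268_435_456          # note: this is exactly 2**28
p_even_run = (18 / 37) ** 28      # 28 straight evens on a fair European wheel

odds_true = 1 / p_even_run
print(f"{odds_true:,.0f} to 1")   # roughly 580 million to 1
print(odds_true / mazur_odds)     # about 2.15: roughly twice Mazur's figure
```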

Here’s where Mazur got it right: He’s conceding that a run of 28 consecutive *odd* numbers would be just as interesting (and is just as likely) as a run of evens. If 28 odds had come up, that would have made it into his book too, because it would be just as extraordinary to the reader.

Thus, he doubles the probability we calculated, and reports that 28 evens in a row or 28 odds in a row should happen only once every 500 years. Fine.

Here’s the problem: He fails to account for several more events that would be just as interesting. Two obvious ones that come to mind are 28 reds in a row and 28 blacks in a row.

There are 18 blacks and 18 reds on the wheel (0 is green). So the probabilities are identical to the ones above, and we now have two more events that would have been remarkable enough to make us wonder if the wheel was biased.

Now, instead of two events (28 odds or 28 evens), we have four such events, so it’s almost twice as likely that one would occur. Therefore, one of these events should happen about every 250 years, not 500. Slightly less remarkable.

What about a run of 28 numbers that exactly alternated the entire time, like even-odd-even-odd, or red-black-red-black? I think if one of these had occurred, Mazur would have been just as excited to include it in his book.

These events are just as unlikely as the others. We’ve now almost doubled our number of remarkable events that would make us point to a broken wheel as the culprit. Only now, there are so many of them, we’d expect that one should happen every 125 years.

Finally, consider that Mazur is looking back over many years when he points out this one seemingly extraordinary event that occurred. Had it happened anytime between 1900 and the present, I’m guessing Mazur would have considered that recent enough to include as evidence of his point that roulette wheels were biased not too long ago.

That’s a 110-year window. Is it so surprising, then, that something that should happen once every 125 years or so happened during that large window? Not really.

Slightly unlikely perhaps, but nothing that would convince anyone that a wheel was unfair.

It’s easy to say 12-4. Extrapolating three wins and a loss over a 16-game season gives us 12 wins, four losses.

But if you said 12 wins, you’re making a big mistake that will cost you a lot of money in the long run. While plenty of teams start out 3-1, only a handful each year win 12 games or more.

If the team had been 4-0 to start the season, you probably wouldn’t have predicted them to go 16-0. But with anything less extreme than that, the temptation is to make the big mistake of extrapolating their record exactly into the future.

Every year, some baseball player starts out on a ridiculous tear for the first two weeks of the season, and we joke about how he’s on pace to hit 120 homeruns.

But if you had to make a futures bet on how many homeruns he would hit that year, what would be your best guess at the actual number?

Nobody in their right mind would say 120. Anything over 60 or 70 would be crazy. You’d probably expect him to have a very good season, but not a historical one. And you’d probably be right.

The force at work here is known as *reversion to the mean*. It’s a property of independent random events which dictates that after the observation of an “extreme event,” the next event is more likely to be closer to the mean than farther away from it.

We want to make a guess at a football team’s true ability to win games, in terms of winning percentage. We’ll never know exactly what that number is; randomness in the outcomes of their games limits us to making only an educated guess at it.

To better understand reversion to the mean, let’s first look at a case where randomness is the *only* factor.

Suppose we have 10,000 people flipping coins together. All flip at the same time. If someone flips heads, he advances to the next round. If someone flips tails, he’s eliminated from the game. What happens?

After the first round, about 5,000 people have flipped heads, so they’re still in the game. After the next round, only about 2,500 are left.

Fast forward ten rounds or so, and we’re left with just a handful of lucky flippers who have gotten heads every single time.

If you want, you can call them expert coin flippers. But if we were to play again, would you expect those same guys to make it to the end again?

Of course not. Their apparent ability to flip coins was an illusion, entirely due to luck. We’d give them the same chances as anyone else in the next game.

In other words, it’s far more likely that they’ll perform around the average next time than it is that they’ll do even better than their exceptional performance the first time. Because their performance was based on luck alone, they will revert to the mean.

[As a commenter pointed out, this coin flipping example is often used in *Fooled by Randomness* to demonstrate survivorship bias.]
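If you’d like to watch it happen, here’s a small Python version of the elimination game (the seed is arbitrary, just to make the run repeatable):

```python
import random

random.seed(0)  # arbitrary seed for repeatability

# 10,000 flippers; a tails eliminates you; play 10 rounds.
survivors = 10_000
for _ in range(10):
    survivors = sum(1 for _ in range(survivors) if random.random() < 0.5)

print(survivors)  # on the order of 10_000 / 2**10, i.e. about ten "experts"
```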

When a team starts out 3-1, we’re tempted to estimate their ability to win at 75%. But is it really that high?

We don’t know for sure. Part of what we’ve observed in their 3-1 record is due to skill, part is due to randomness. It’s impossible for us to say how big a role each factor played.

It’s possible that they’re only a 60%-winning team in the long run, but that luck has been on their side so far. By the same token, it’s possible they’re a *great* team, and that their only loss was due to a few bad bounces.

However (and this is a big however), we must suspect this team of being lucky. If you’ll admit that luck plays at least a small part in the outcome of a football game—and how can you not—then we have to view the teams that start off hot with some of the suspicion with which we viewed the lucky coin flippers.

In short: **The fact that they’ve been successful makes it more likely that luck was on their side than that it was against them.** Once you accept that, then the only logical conclusion about their future performance is that they’re more likely to perform worse than they are to perform better.

In a given NFL season, you might see six, seven, or eight teams start out 3-1. A few will finish the season at 12-4. A few (maybe) will finish even better than that, but most will finish worse. And that’s reversion to the mean at work.
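To watch reversion happen, here’s a Python sketch of a made-up league in which each team’s true per-game win probability is drawn uniformly between 30% and 70%. That prior is an assumption chosen purely for illustration; the exact numbers will shift with it, but the effect won’t:

```python
import random

random.seed(1)  # arbitrary seed for repeatability

# Hypothetical league: each team's true per-game win probability is drawn
# uniformly from 0.30 to 0.70 (an assumed prior, purely for illustration).
final_wins = []
for _ in range(100_000):
    p = random.uniform(0.30, 0.70)
    season = [random.random() < p for _ in range(16)]
    if sum(season[:4]) == 3:               # keep only teams that start 3-1
        final_wins.append(sum(season))

avg = sum(final_wins) / len(final_wins)
print(f"average final wins for 3-1 starters: {avg:.1f}")  # well under 12
```

The 3-1 starters are better than average, but naive extrapolation to 12-4 badly overshoots their typical finish.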

(For the more mathematically inclined, here’s a more precise explanation of reversion to the mean. Warning: integrals!)

There was nothing specific to winning teams about the above discussion. Reversion to the mean also happens with bad teams. A team that starts off 1-3 or 0-4 has likely been on the receiving end of more bad luck than good, and we can expect them to improve in the future.

So if bad teams aren’t as bad as they look, and good teams aren’t as good as they look, the message is simple: take the underdog early in the season. Good handicappers won’t be fooled by the early appearance of good and bad teams, but most of the public might. And large numbers of people certainly have the ability to influence the line and give too many points to the underdog.

Similarly, when making futures bets on end-of-season wins, homeruns, touchdowns, or whatever, remember that most things will end up less extreme than they start. The lines will already account for this to some extent, so they’re very tempting bets if you don’t remember that teams and players will, in general, revert to the mean over time.

As shown by the example of the homerun hitter in baseball or the team that starts off undefeated, we tend to intuitively understand reversion to the mean in the most extreme cases. We don’t predict that anyone will hit 120 homeruns or that many teams will go undefeated, for example. For this reason, I suspect that there’s less value in betting on mean reversion in extreme cases, since everyone already has a good idea that it’s hard to continue such extreme performance.

I’ll leave you with a question.

Last post I wrote about the Gambler’s Fallacy, where we established why a roulette wheel that has landed on black several times is in no way “due” to hit red because of the streak of blacks. How does reversion to the mean not contradict this?

Be the first to answer correctly and you win…the envy of the dozens (yes, DOZENS) of Thinking Bettor readers everywhere. Come on, what more could you ask for?

For example, when two events happen in sequence, we have a tendency to believe that the first one caused the second.

This is extremely helpful when it protects us from touching a hot burner a second time, or from poking around a bees’ nest. It’s not as helpful when it causes us to believe that blowing on the dice will help us to avoid rolling a seven.

To a caveman, the cost of not understanding randomness was small. He might waste his time doing an unnecessary rain dance or two, but assigning causality where there was none wouldn’t kill him, the way failing to do so might.

As a result of evolution favoring the shortcuts that helped us survive, we evolved without much grasp of the concept of randomness. We’re now subject to several biases regarding random events, one of which is the subject of today’s post.

One such bias is so prevalent, it has earned the name the Gambler’s Fallacy.

When a roulette wheel lands on black five spins in a row, we want to say that it’s “due” to hit red. The same goes for a coin flip that comes up tails several consecutive times; lots of us feel it owes us a few heads to get back to even.

We know these inanimate objects have no memory, and that consecutive trials are independent. But we also believe in the so-called “Law of Averages,” which tells us that deviations from the theoretical mean should even out over time.

So which is correct?

Let’s assume a coin is perfectly fair. It should come up heads half the time, tails the other half. Now let’s say we start flipping this coin and (amazingly) get 20 heads in the first 20 tosses.

My question to you is this: **Given this history and the fact that the coin is fair, how many heads should we expect in the first 1000 flips?**

A lot of people will answer “500, since over time the coin will be fair.” But that’s wrong.

When you say “500,” knowing that the first 20 flips were heads, you’re predicting that of the next 980 tosses, only 480 will be heads, while the other 500 flips will be tails.

Doesn’t sound very fair, does it?

The correct answer is that we expect 510 heads, 490 tails after 1000 tosses. Why? We start with the 20 heads. We have 980 flips to go, of which half (490) should be heads, while the remaining half should be tails. (Remember, the coin is assumed to be fair.) That leaves 510 heads and 490 tails.

Another way of looking at it: Heads has been given a 20-flip head start, we have a fair and memoryless coin, so after 1000 flips, we expect there to still be 20 more heads than tails.
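If the arithmetic feels suspicious, a quick Python simulation (with an arbitrary seed) backs it up:

```python
import random

random.seed(42)  # arbitrary seed for repeatability

# Bank the first 20 heads, then flip the remaining 980 fairly.
trials = 10_000
total_heads = 0
for _ in range(trials):
    total_heads += 20 + sum(1 for _ in range(980) if random.random() < 0.5)

avg_heads = total_heads / trials
print(round(avg_heads))  # about 510, not 500
```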

Despite what you’ll hear almost nightly from a naive sports commentator somewhere, there’s no such thing as the “Law of Averages.”

What there is, however, is the Law of Large Numbers, which says that over time, the percentage by which the number of heads differs from half will go to zero. So what this law says about our coin flipping experiment is this: After a million, a billion, or a hundred trillion flips, we’d still expect heads to be winning by 20, given that it started out that way.

But in the context of such enormous numbers, 20 is a tiny percentage, and it becomes even less significant as the number of flips grows. So the proportion of flips that are heads converges to one half, even without “making up for” the run of 20 heads at the start.

In short: The Law of Large Numbers does not say that we should expect more tails than heads in future flips, only that the effect of the 20 heads will become negligible after many flips.
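Since expectations are all we need here, a few lines of Python arithmetic show both facts at once: the expected lead stays at 20 forever, while the proportion of heads sinks toward one half.

```python
# Expected heads after n fair flips, given the first 20 were all heads.
def expected_heads(n):
    return 20 + (n - 20) / 2

for n in (100, 1_000, 1_000_000):
    lead = 2 * expected_heads(n) - n       # expected heads minus expected tails
    print(n, lead, expected_heads(n) / n)  # lead is always 20; ratio sinks to 0.5
```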

Watch and listen closely, and you’ll notice this fallacy all the time. The logic in the following examples is equivalent to assuming that if a roulette wheel has come up black a few times in a row, it’s bound to come up red.

*You play poker and throw away every hand for half an hour because you’re being dealt trash. With every passing hand, it feels like a playable hand becomes more likely.*

(Of course it isn’t. Your chances of getting a playable hand now are exactly what they were before the run-o-trash started. Same goes for flush and straight draws: Lots of misses doesn’t make you due to hit one, so you shouldn’t adjust your play when you get frustrated.)

*An NFL team wins both games against a divisional opponent one season, then faces them again in the playoffs. Someone on ESPN tells us “how hard it is to win three games against a team in this league.”*

(Sure, but not once they’ve already won two of them. Two wins here does not make a loss “due.” Now it’s about winning one game against a team that’s likely not as good as them. If the reasoning is that the weaker team makes adjustments, fine, but I don’t think that’s what they’re saying.)

*An NHL team gets down 3-1 in a playoff series. They win the next two games. Now radio hosts tell us that they probably won’t win Game 7, because so few teams have ever come back from a 3-1 deficit.*

(Same idea here. Once they’ve tied it up 3-3, another win would make it three in a row. Three playoff wins in a row is unlikely, but once two of them are in the books, a team isn’t “due” for a loss any more than they were at the start.)

*Someone playing a slot machine for an hour with no luck gets up to hit the ATM for more money. When he does, he parks his wife in the seat so that nobody can get in on his machine, which is sure to start paying out after such a long drought.*

(By now, you get the point. The trials are obviously independent, so it doesn’t matter how long it’s been since the machine hit.)

The sports examples are excusable if there’s a good reason why the outcomes of the games shouldn’t be independent (psychology, shifting home field, adjustments by a team that has lost several games in a row). Barring that, though, I see them as gambler’s fallacy.
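For the purely mechanical cases, like the slot machine, a quick simulation settles it. Here’s a Python sketch that flips a fair coin a million times and checks whether heads is any more likely right after a run of five tails:

```python
import random

random.seed(7)  # arbitrary seed for repeatability

# Flip a fair coin a million times, then look only at the flips that
# immediately follow a run of five straight tails. Is heads "due" there?
flips = [random.random() < 0.5 for _ in range(1_000_000)]   # True = heads

after_run = [flips[i] for i in range(5, len(flips))
             if not any(flips[i - 5:i])]                     # prior five all tails

freq = sum(after_run) / len(after_run)
print(f"heads right after five tails: {freq:.3f}")  # stays near 0.500
```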

There’s a statistics concept known as *reversion to the mean* which, at first glance, seems to be inconsistent with independent trials, but in fact, it isn’t. That’s the subject of a future post, and if you like what you read here, I hope you’ll subscribe to get free updates. And please, let me know in the comments what you think.

Want more math? More basics like this? Harder stuff?

Let me know. Thanks for reading.

This blog isn’t about that. It’s quite possible that the math and statistics tools explored here will make you a better gambler, but that’s not really the point.

The point is to appreciate gambling, to relish the ins and outs of this mysterious and beautiful concept of randomness. And doing that starts with understanding why we gamble in the first place—even when we don’t have the best of it.

Smart people know they won’t win most casino games they play. Yet even informed bettors can’t resist trying their hand at negative expectation games like craps, roulette, and blackjack (without counting cards).

So why would a rational, calculating person willingly accept a negative-expectation proposition?

Entertainment doesn’t work as an explanation—any fun one has while gambling is entirely dependent on the money involved and based on the idea of increasing one’s wealth. Take the money away, and the fun is gone. (Ever tried playing blackjack just for giggles?)

In fact, in a classical economics framework, even a completely fair wager—one in which the payouts are in accordance with the odds and there’s no house advantage—can be shown to be irrational. That is, it’s inconsistent with the goal of maximizing expected utility. Read on.

The problem starts with the notion of utility.

Utility is simply a measure of how good you feel. It’s what everyone goes through life attempting to maximize. And while utility is assumed to increase with wealth, the relationship isn’t linear.

Instead, utility in classical economics is assumed to increase as wealth increases, but by less and less for larger values of wealth. The curve plotting utility against wealth is concave-down. (Depending on your background, it may help to think of this property as “diminishing marginal utility,” or as a “negative second derivative of utility with respect to wealth.”)

As a simple example of decreasing marginal utility, consider which is worth more: Your first hundred dollars of wealth, or your first hundred dollars *on top of your first million*?

Your first hundred dollars of wealth has nearly infinite worth to you.

Right? If you only have 100 dollars to your name, that 100 dollars is what keeps you alive for the next few days (assuming you don’t have access to loans).

Contrast this with the first hundred bucks one earns after making a million. Sure, that extra hund-o might afford you a nice bottle of wine, which certainly enhances your utility. But compared to your survival, that wine is all but worthless.

Now, let’s think about it from the perspective of a gambler. When you make a straight-up, 50/50 bet on a coin flip, you’re risking one dollar to win one dollar. But because of diminishing marginal utility of wealth, that dollar you might win is worth *slightly less* than the one you could lose, in terms of how you feel about it.

In this framework, even a fair bet with a friend is a bad one. Your expected win is exactly $0, but your expected change in utility is slightly negative. If your utility function is concave-down, you don’t take this bet.
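As a sketch of that argument, here’s a quick Python check using logarithmic utility, a standard concave-down choice; the wealth and stake figures are made up purely for illustration:

```python
import math

# Logarithmic utility: a standard concave-down choice (the wealth and
# stake figures here are made up purely for illustration).
wealth, stake = 1_000.0, 100.0

u_now = math.log(wealth)
exp_money = 0.5 * (wealth + stake) + 0.5 * (wealth - stake)
exp_utility = 0.5 * math.log(wealth + stake) + 0.5 * math.log(wealth - stake)

print(exp_money == wealth)     # True: the bet is fair in dollars...
print(exp_utility < u_now)     # True: ...but a loser in utility, so decline
```

By Jensen’s inequality, any concave-down utility function gives the same verdict; the logarithm is just one convenient example.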

Put another way, someone whose utility function has this shape is *risk-averse*: Given the choice between a fixed level of wealth and a gamble, *even if that gamble is fair or slightly favorable*, someone who is risk averse prefers the sure thing.

The basic assumption that all people have concave-down utility functions, then, implies that gamblers, or risk-seekers, are irrational. Given that the expected utility of even a fair gamble is negative, it must be that gamblers are not maximizing expected utility and are idiots. That’s just what Daniel Bernoulli, the first person to think about this question in depth, concluded.

If gambling is irrational, then a huge contingent of the world is irrational, from the compulsive gambler to the once-a-year Vegas-vacationer, to the old couple on the NYC to Atlantic City bus to play slots for the day.

Irrational. Really?

Economics is supposed to explain the behavior of lots of people, not just the ones who don’t gamble. Here the model fails. As a result, modern economists are proposing new models to explain gambling behavior.

Some alternative theories deal with the problem by allowing a locally-convex (the opposite of concave) utility function near the current level of wealth, so that dollars gained are worth more than dollars lost, at least when it comes to fluctuations around the current wealth level. That is, when the monetary risks are small, many people seek out risk rather than avoiding it.

Such theories are unsatisfying, in that they make no effort to explain *why* utility functions might be locally convex (i.e., why gamblers seek out small risks). For this, one needs to examine psychological factors that make gambles appealing. Some propose that gamblers believe they are “lucky,” despite understanding on an intellectual level that the laws of probability are against them. This “gambler’s illusion” is the subject of Joseph Mazur’s new book, *What’s Luck Got to Do With It?*

There’s still no universally-accepted explanation for why we gamble, but that doesn’t mean we can’t use our understanding of the problem to enjoy gambling more.

The next time you gamble, whether it’s a fair bet with a buddy or you’re paying a premium in a casino, realize what’s going on. You’re behaving in a way that a few hundred years of economics still can’t explain. There’s something in a wager that brings you value, beyond simple expected utility of wealth, yet still somehow rooted in the chance to improve your lot.

When you’re in the casino, you don’t have to stick to the “smart bets.” Nobody’s forcing you to bet the pass line on the craps table, taking maximum odds every time (since as we all know, odds are the only fair bet in the whole place). Nor do you have to grind it out at blackjack, following basic strategy to the letter, just so you can reduce the house edge as much as possible that way.

If you’re playing those games in the first place, then there’s something alluring to you besides expected profit or utility of wealth. So play them in the way that’s most fun for you.

That’s why you’re there. Even losing becomes more fun when you can stand back and marvel at why you took the risk in the first place.

And when someone tells you you’re making stupid bets, remind them that even if they could play with no house edge, they’d be irrational to do so, in the eyes of classical economists. There’s something more to it, something about the enjoyment of a gamble.

So gamble in the way that you enjoy most.

*This blog is brand new, and your feedback will help me determine what direction to go with it. If you liked this post (or even if you didn’t), leave a comment to let me know. If you’d like to get future posts delivered to your inbox or reader, click here to subscribe to Thinking Bettor.*