# Economic Decision Theory

Many states have lotteries that raise money to fund education. This is ironic. A colleague told me that he considered the state lottery to be a tax on stupid people. Let's do the math.

We posit a lottery that sells tickets for \$1 each and has a jackpot payout of \$6,000,000. The odds of winning are one in 10 million. (The lottery may also have smaller payouts for matching fewer numbers, but we will treat those as immaterial for this analysis.)

If you buy a single ticket, your chance of winning \$6 million is 1 in 10 million. If you buy 2 tickets, your odds double to 2 in 10 million. In fact, if we follow this line of reasoning, we can greatly improve our odds by buying all 10 million possible tickets! This is no longer a game of chance. We know for sure that we will win the lottery! We will win \$6 million! However, we will have invested \$10 million. Thus, we are sure to lose \$4 million. The guaranteed value of the strategy is a whopping loss of \$4 million. Don't do this at home.

Imagine that the lottery is weekly and dynamic, such that if no one wins the jackpot in a given week, that amount is added to the next week's jackpot - with no change in the odds of winning. So if the week 1 jackpot is \$6 million and no one wins, then the week 2 jackpot will be \$12 million. At that point we can spend \$10 million to purchase all possible tickets, confident that we will win \$12 million, for a net profit of \$2 million. Not too shabby. The guaranteed value is a gain of \$2 million.
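The buy-every-ticket arithmetic from the last two paragraphs can be sketched in a few lines of Python; the function name and defaults are mine, matching the article's assumed \$1 tickets and 10 million combinations:

```python
# Guaranteed value of buying all 10 million $1 tickets: the jackpot
# comes back with certainty, minus the $10 million outlay.
def buyout_value(jackpot, tickets=10_000_000, price=1):
    return jackpot - tickets * price

print(buyout_value(6_000_000))    # -4000000: a sure loss
print(buyout_value(12_000_000))   # 2000000: the rollover week
```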

Let's go back to the case of buying a single ticket. Our cost is one dollar. The odds of winning are 1 in 10 million. That is, we will win \$6 million once every 10 million times. This may seem like an odd question, but what are our average winnings? Since we have eliminated partial payouts for non-jackpot tickets, we get nothing in 9,999,999 of the trials. However, we still get \$6 million when we hit the jackpot. The average is then \$6 million / 10 million trials, which comes out to \$0.60 per trial. This average is known as the expected value. That is, we are paying one dollar for a ticket whose value is \$0.60. Hence, the tax on stupid people. The behavioral psychologist B.F. Skinner, who believed gambling violates the rules of rational behavior, suggested that states could exploit this human frailty more efficiently and do away with taxes. (see the Skinner reference)

How do we know whether a bet is irrational? Imagine the prior case where the jackpot, if not collected, rolls over to the next week. In that case, the jackpot is now \$12 million, while the odds remain 1 in 10 million. What is the expected value of a one dollar ticket? We again calculate the average: we divide \$12 million by 10 million trials and get an expected value of \$1.20 for a one dollar ticket. In this case, the expected value exceeds the cost of playing, and the bet is no longer irrational. (We need the further assumption that in the case of multiple winners, the jackpot will not be prorated, but rather that each winning ticket will receive the full jackpot amount.) (see the Ellenberg reference for a practical example involving MIT students and the Massachusetts state lottery.)
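Both ticket valuations can be checked with a short sketch; the function name is mine, and it assumes, as the text does, a jackpot-only payout with no splitting:

```python
# Expected value of a single $1 ticket: the jackpot divided by the
# number of equally likely outcomes (1-in-10-million odds).
def ticket_ev(jackpot, outcomes=10_000_000):
    """Average winnings per ticket, in dollars."""
    return jackpot / outcomes

print(ticket_ev(6_000_000))   # 0.6  -- less than the $1 price
print(ticket_ev(12_000_000))  # 1.2  -- the rollover week
```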

Expected value provides a valuable decision metric in cases where probabilities are present. Consider the following dice game.

We have a normal 6 sided die with the numbers 1 through 6 on the respective faces. You throw a single die. If the number n on the top face is odd, you pay n dollars. If the number is even, you receive n dollars. For example, if you throw a 3, you lose \$3. If you throw a 2, you win \$2. Do you want to play this game? If so, how much would you be willing to pay for each round?

We want to calculate the expected value of playing a round of this game. We can summarize all possible outcomes in the following table.

| Die   | Odds  | Payout | Odds * Payout |
|-------|-------|--------|---------------|
| 1     | 1/6   | -\$1   | -\$0.17       |
| 2     | 1/6   | \$2    | \$0.33        |
| 3     | 1/6   | -\$3   | -\$0.50       |
| 4     | 1/6   | \$4    | \$0.67        |
| 5     | 1/6   | -\$5   | -\$0.83       |
| 6     | 1/6   | \$6    | \$1.00        |
| Total | 1     |        | \$0.50        |

We are assuming that this is a fair die, that is, that each outcome has the same probability of occurring, exactly one in six. We note that the sum of these probabilities is one, which is another way of saying that it is certain that throwing the die will result in one of these outcomes and no other possibility. The expected value calculation is the sum of the products of each probability with its respective payout. In this case, that sum is 50 cents. Thus, we should be willing to pay up to \$0.50 to play this game. If the price is over \$0.50, it becomes a tax on stupid people.
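The table's arithmetic can be reproduced exactly with Python's `fractions` module; this is just a sketch, and the variable names are mine:

```python
from fractions import Fraction

# One round of the dice game: an odd face n costs $n, an even face n
# pays $n, and each face has probability 1/6.
payout = {n: (n if n % 2 == 0 else -n) for n in range(1, 7)}
ev = sum(Fraction(1, 6) * p for p in payout.values())
print(ev)   # 1/2, i.e. an expected gain of $0.50 per round
```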

Sometimes the decision is not simply whether to play a game and at what price, but rather a choice among alternatives. Imagine that you are an executive and your accounting software is getting old. You have three choices: buy a third-party program, build your own accounting program from scratch, or make do with what you have. For each option, there are costs, benefits, and a likelihood of success, per the following table.

| Option        | Project Cost | Benefit of Success | Likelihood of Success | Expected Value |
|---------------|--------------|--------------------|-----------------------|----------------|
| Buy new       | \$2,000,000  | \$4,000,000        | 40%                   | -\$400,000     |
| Build new     | \$2,000,000  | \$3,000,000        | 50%                   | -\$500,000     |
| Retain legacy | \$100,000    | \$0                | 100%                  | -\$100,000     |

As we did with the dice game, we can calculate the expected value of each option. In this case, the expected value is actually an expected cost for each option. The smallest cost is staying with the existing legacy software. (Note: applying traditional capital budgeting models to IT investments is notoriously ill-advised. A firm that strictly followed those models would rarely if ever invest in new technology. More recently, people have applied option pricing models to IT investments. That is, investing in IT gives the firm options that may pay off down the road. Companies that bought desktop computers and local area networks in the 1980s were better able to exploit the internet and world wide web in the 1990s. Next week we look in more detail at capital budgeting and net present value calculations.)
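The table's expected values can be recomputed in a few lines; the helper function is mine, using integer percentages to avoid floating-point noise:

```python
# Expected value of a project: likelihood-weighted benefit minus cost.
def expected_value(cost, benefit, pct_success):
    return benefit * pct_success // 100 - cost

options = {
    "Buy new":       (2_000_000, 4_000_000, 40),
    "Build new":     (2_000_000, 3_000_000, 50),
    "Retain legacy": (  100_000,         0, 100),
}
for name, (cost, benefit, pct) in options.items():
    print(f"{name}: ${expected_value(cost, benefit, pct):,}")
```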

The astute reader might ask where we get these numbers, such as the benefits of success and the likelihood of success. The answer here is that we made them up. Unfortunately, that is often the case in real life. The expected value approach adheres to H.L. Mencken's aphorism of being neat, plausible, and wrong.

There are domains in which decision analysis is effective, particularly in games of perfect information, such as chess and checkers. Here the players take turns. Each player knows exactly what options her opponent has on the next move. In fact, the player can calculate all possible responses to all possible moves, and identify which moves can lead to a guaranteed win. Theoretically, the player can carry this analysis all the way to the end of the game. However, there are practical limitations to this analysis.

Before looking at the limitations, let us see what is possible. Consider tic-tac-toe. The first player has nine possible moves. Her opponent then has eight possible moves. The number of possible moves decrements with each move, or ply, of the game. Thus, we can bound the total number of possible tic-tac-toe games: 9 * 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1, which is 9 factorial (9!), or 362,880. The actual number is smaller, since most games end before all nine moves are played.
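The 9! bound is a one-liner with the standard library:

```python
import math

# Upper bound on the number of distinct tic-tac-toe games:
# 9 choices for the first move, 8 for the second, and so on.
print(math.factorial(9))   # 362880
```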

Now we turn to chess. The first player, white, has 20 possible moves - advance one of the eight pawns either one or two squares, or move one of the two knights forward left or right. Similarly, her opponent, black, also has 20 possible moves. The number of moves is known as the branching factor. The total number of board positions after these initial moves is simply the product: 20 * 20 = 400. During the game, the branching factor will change depending on the board position. It might be greater than 20 or less than 20. Also, unlike tic-tac-toe, it is possible to return to an earlier board position in chess. Below is a table which shows the number of possible board positions after each round, assuming a branching factor of 20.

| Round | Positions   | Seconds  | Years       |
|-------|-------------|----------|-------------|
| 1     | 400         | 0.0004   | 1.26752E-11 |
| 2     | 160000      | 0.16     | 5.07009E-09 |
| 3     | 64000000    | 64       | 2.02804E-06 |
| 4     | 25600000000 | 25600    | 0.000811215 |
| 5     | 1.024E+13   | 10240000 | 0.324486019 |
| 6     | 4.096E+15   | 4.1E+09  | 129.7944077 |
| 7     | 1.6384E+18  | 1.64E+12 | 51917.76307 |
| 8     | 6.5536E+20  | 6.55E+14 | 20767105.23 |
| 9     | 2.62144E+23 | 2.62E+17 | 8306842092  |
| 10    | 1.04858E+26 | 1.05E+20 | 3.32274E+12 |
| 11    | 4.1943E+28  | 4.19E+22 | 1.32909E+15 |
| 12    | 1.67772E+31 | 1.68E+25 | 5.31638E+17 |
| 13    | 6.71089E+33 | 6.71E+27 | 2.12655E+20 |
| 14    | 2.68435E+36 | 2.68E+30 | 8.50621E+22 |
| 15    | 1.07374E+39 | 1.07E+33 | 3.40248E+25 |
| 16    | 4.29497E+41 | 4.29E+35 | 1.36099E+28 |
| 17    | 1.71799E+44 | 1.72E+38 | 5.44397E+30 |
| 18    | 6.87195E+46 | 6.87E+40 | 2.17759E+33 |
| 19    | 2.74878E+49 | 2.75E+43 | 8.71036E+35 |
| 20    | 1.09951E+52 | 1.1E+46  | 3.48414E+38 |

The third column, labeled Seconds, indicates how many seconds would be required to process all possible board positions, assuming a computer capable of analyzing a million positions a second. The fourth column, labeled Years, is simply the Seconds column converted to the corresponding number of years, assuming 60 seconds per minute, 60 minutes per hour, 24 hours per day, and 365.25 days per year. Under these assumptions, a complete analysis of all possible 5-round games would take this computer nearly four months, and the round 6 analysis would take 129 years. Astute readers will note that the computational complexity of chess cannot be overcome simply by throwing more machine power at the problem. Suppose that instead of one computer analyzing a million positions a second, we harness a million computers, each analyzing a million positions a second. This improvement buys us roughly two more rounds: a complete analysis of round 8 now takes only 20 years or so.
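The table can be regenerated from the stated assumptions (a constant branching factor of 20 and one million positions per second); a short sketch, with variable names of my own choosing:

```python
# Positions after each round (two plies per round) with a constant
# branching factor of 20, and the time to examine them all at one
# million positions per second.
RATE = 1_000_000                          # positions per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25  # as in the text

for rnd in range(1, 21):
    positions = 20 ** (2 * rnd)           # 400, 160000, 64000000, ...
    seconds = positions / RATE
    print(rnd, positions, seconds, seconds / SECONDS_PER_YEAR)
```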

This inexorable growth is known in computer science as a combinatorial explosion. Other disciplines often ignore this problem and pose solutions that blithely assume infinite computational capacity. Philosophers pose thought experiments that require the examination of infinitely many possible worlds. The efficient market hypothesis in finance assumes that stock prices reflect all available information relating to a stock. Herbert Simon, the pioneering computer scientist, artificial intelligence researcher, and economist, won the Nobel Prize in Economics based in part on his theory of bounded rationality, which acknowledges the practical limits of computational power in making decisions. Simon knew that it was impossible for a computer to play a perfect game of chess - that "solving chess" was not possible.

Wait a minute - don't computers play chess at the top level? How can they do that if they don't analyze all the positions? Computers use heuristics that permit them to avoid searching all possible positions. In game-playing parlance, they prune the search tree, and there is a specific game-playing heuristic called alpha-beta pruning. It was invented, or discovered, by Arthur Samuel, an executive at IBM in the 1950s who created a computer checkers program in his spare time, using idle cycles on IBM machines. (see the Samuel reference) Samuel's program was a magnificent achievement. Not only did Samuel use pruning, he also implemented machine learning, by having the program remember past board positions from numerous simulated games. His program was brilliant! Possibly the only available idea Samuel failed to employ was giving his program a name; it is always known simply as Samuel's program. It is clearly an example of a computer program that makes decisions without human intervention.
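Alpha-beta pruning is easy to sketch on a toy game tree. This minimal Python version is my own illustration, not Samuel's code: leaves are scores from the maximizing player's point of view, internal nodes are lists of child subtrees, and the search skips any branch that cannot change the final minimax value:

```python
# Minimax with alpha-beta pruning on a hand-built game tree.
# A real chess or checkers engine would add move generation and a
# board-evaluation heuristic; this only shows the pruning idea.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):      # leaf: return its score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # prune: opponent avoids this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:               # prune
                break
        return value

# A classic textbook tree: the minimax value is found without
# having to visit every leaf.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, True))   # 6
```

On this tree, once the third subtree yields a 1 for the minimizing opponent, the remaining leaf is never examined: the maximizer already has a guaranteed 6 elsewhere.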

Current computer chess programs lose from time to time against human opponents, indicating that the programs are not infallible. Imagine that you have an undefeated chess program. How could you make it lose?

## References

J. Ellenberg, *How Not to Be Wrong*. Penguin Press, 2014.

B.F. Skinner, "Freedom, at Last, From the Burden of Taxation," *New York Times*, July 26, 1977.

A.L. Samuel, "Some Studies in Machine Learning Using the Game of Checkers. II - Recent Progress," *IBM Journal of Research and Development*, November 1967.

## Homework

hw1: data-driven blackjack