Deal or No Deal

#21
beadman said:
When an offer is made by the banker, say 5 cases are left ($100,000, $200,000, $5, $1, and $500,000) and the offer is $130,000. I would say add up the remaining cases ($800,006) and divide by 5 = $160,001. If that average is greater than the offer, you should turn the offer down; if it is lower, you should take the deal.
Statistically, this has got to be right. If the offer is more than the average of all remaining cases, take it. If the offer is less than the average of all remaining cases, reject it.
HOWEVER, this doesn't take into account the "risk aversion" that I was referring to above.... i.e., a person may take less than the average of all remaining cases if there is A CHANCE he may end up with $1. A person may refuse to take more than the average of all remaining cases if he still has "a shot" at $1 million or $750,000. Depends on the individual....
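For what it's worth, here's that comparison as a few lines of Python (the case values and offer are just the ones from beadman's example; this is the pure expected-value rule, ignoring risk aversion):

```python
def expected_value(cases):
    """Mean of the remaining case values."""
    return sum(cases) / len(cases)

def pure_ev_decision(cases, offer):
    """'Deal' if the offer beats the average of what's left, else 'No Deal'."""
    return "Deal" if offer > expected_value(cases) else "No Deal"

remaining = [100_000, 200_000, 5, 1, 500_000]   # beadman's example
offer = 130_000

print(expected_value(remaining))            # 160001.2
print(pure_ev_decision(remaining, offer))   # No Deal
```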
 

justme

homo economicus
#23
I can't imagine the bank ever offering the expected value. Has it ever done so? I would imagine that it always would offer considerably less. NBC has a webpage where you can play the game yourself. Obviously there's a banker algorithm for that game. I wonder if it's similar to the one used in the show.

I played the webgame for a while. It seems like the bank tends to offer about half the expected value.
 
#24
justme, I think that you are right, that the bank always offers LESS than expected value... but I'm not sure how it decides HOW MUCH LESS... Seems pretty random to me.

(Perhaps it decides based on contestant reaction, playing poker, as slinky alluded to; i.e., trying to choose a number that will be the HARDEST choice in the contestant's mind? Just a guess, but that would be more interesting than some sort of random variation below the median (or mean)....)
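Just to make that guess concrete, here's a toy banker that offers a random fraction of the mean of the remaining cases; the 40-60% range is completely made up, not anything from the show:

```python
import random

def toy_banker_offer(cases, low=0.40, high=0.60):
    """Hypothetical banker: offer a uniformly random fraction of the mean of
    the remaining cases. The fraction range is a guess, not the show's rule."""
    mean = sum(cases) / len(cases)
    return random.uniform(low, high) * mean

print(round(toy_banker_offer([100_000, 200_000, 5, 1, 500_000]), 2))
```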
 

Slinky Bender

The All Powerful Moderator
#26
How about this for a strategy: you'd have to look to see if the bank has been offering a consistent fraction of the remaining cases' value regardless of what the contestant has. IF the bank knows what the contestant has, wouldn't there be a good chance that it would alter its offers to the low side (not wanting the guy to take any deal) if it was a low money case? Obviously it couldn't be too low, or the show would be killed. But if you could look back at what has been offered and determine that this was the case (get it, "case"?), then a good strategy would be to see which way the offers were coming in: if they were the "high ones", turn down the offer (since you have a good money case), and if they were the "low ones", take the offer (since you have a bad money case).

In other words, suppose you examined the history of offers and found that when the contestant had a low money case the bank offered 50% of the mean of the remaining cases, and when the contestant had a high money case the bank offered 75% of the mean. You could then see which type of offer you were getting: if it was 50%, it would be a "tell" that you had a low money case, and if it was 75%, a "tell" that you had a high money case.
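A sketch of reading that "tell", assuming purely for illustration that the 50%/75% pattern above actually held:

```python
def read_the_tell(cases, offer, low_frac=0.50, high_frac=0.75):
    """Classify the offer by which hypothetical pattern it is closer to:
    ~50% of the mean -> 'tell' that you hold a low case, so take the deal;
    ~75% of the mean -> 'tell' that you hold a high case, so refuse.
    The 50%/75% fractions are made up for the example above."""
    fraction = offer / (sum(cases) / len(cases))
    if abs(fraction - low_frac) < abs(fraction - high_frac):
        return "Deal"       # looks like the low-case pattern
    return "No Deal"        # looks like the high-case pattern

remaining = [1, 1_000, 50_000, 750_000]
mean = sum(remaining) / len(remaining)
print(read_the_tell(remaining, offer=0.52 * mean))   # Deal
print(read_the_tell(remaining, offer=0.73 * mean))   # No Deal
```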
 

justme

homo economicus
#27
I suppose that would work if:

1. The bank had foreknowledge of the case (or rather, the bank's algorithm had foreknowledge)

2. The bank used a consistent algorithm like 50% - 75%.

Did you ever find a site with raw data for the game?
 

justme

homo economicus
#28
I wonder if the producers of the show have insurance (in the form of derivatives) on the payoffs. Then the network wouldn't care how much the player won.

But the issuer of the derivative (who, theoretically, would be much better able to play the 'banker' effectively) would.

How much fun would a two-player variant be in which the bank was a different player whose payoff was negatively correlated with the case-picker's payoff?
 
#30
justme said:
I can't imagine the bank ever offering the expected value. Has it ever done so? I would imagine that it always would offer considerably less. NBC has a webpage where you can play the game yourself. Obviously there's a banker algorithm for that game. I wonder if it's similar to the one used in the show.

I played the webgame for a while. It seems like the bank tends to offer about half the expected value.

It depends on the cases left; when, say, there are 5 left, I have seen offers that exceed the "expected value".
 

justme

homo economicus
#31
That may make some sense, actually, now that I think about it. If you get down to the last few cases having turned down less-than-expected-value offers, the player has already demonstrated that they're at least more risk seeking than the bank thinks the average person is. If the bank itself is risk averse, it might have to change its strategy to reflect that.

That is, a risk averse bank may use some gambling strategies to predict that the player will be more risk averse than it is, but if the player reveals that to be untrue, the bank may have to compensate with a deal that's worse for it than the expected value.
 
#32
Don't forget the goal is entertainment, and that enters strongly into the bank's decisions. I'd bet the rent they select contestants most likely to let it ride as long as possible to keep the suspense up. They may even give them some form of personality test for this purpose.

My mother is a case in point of the contestant they don't want. She's always ranting about how the contestants should take the offer, take the offer. They wouldn't get much suspense from her.

In the U.S. show, is the $1 million paid as a lump or annuity?
 
#33
All I know is I watch the show with my kids and it gives them a good idea of probability, fractions, percentages, and how stupid some people get when they are on TV.

Then again, based on jm's posts, I am risk averse.
 

justme

homo economicus
#34
Most people are risk averse. The standard economic assumption is that utility functions are concave increasing. That is, people like more money better than less money, but the last dollar is worth less than the first.
 

Slinky Bender

The All Powerful Moderator
#37
But basically what he's saying is that as the potential size of the loss increases, people become more averse to that risk.

For example, let's say we are talking about flipping a coin, but it's a weighted coin so that it's 60% heads and 40% tails. You'd bet $1 a flip on heads all day long, right? But how about 1 flip for your car, your house, and your IRA?
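To put numbers on that: with any concave utility (log utility below, purely as an illustration, and a made-up $500,000 net worth), the 60/40 coin is worth taking for $1 but not for everything you own:

```python
import math

def take_the_bet(wealth, stake, p_win=0.6):
    """Expected-utility check for a 60/40 coin flip at a given stake,
    using log utility only to illustrate concavity (risk aversion)."""
    u = math.log
    eu_bet = p_win * u(wealth + stake) + (1 - p_win) * u(wealth - stake)
    return eu_bet > u(wealth)

wealth = 500_000                        # hypothetical net worth
print(take_the_bet(wealth, 1))          # True: bet $1 a flip all day long
print(take_the_bet(wealth, 499_999))    # False: not the house, car, and IRA
```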
 
#38
Just Me,

Maybe you can explain this, but it seems like the calculation in the DorND game at any given time is not whether the odds-weighted bank offer is "fair market" (i.e., what the size of the house slice is) but what the "cost" to the player is (i.e., a certain bank offer) against the possible next move. Again, if the player could play an infinite number of times, then in any given situation the rational player would fold once the house position was minimized. But with this game they only get to play once. Any help in explaining how this works in terms of game theory? Thanks,
 

justme

homo economicus
#39
It hadn't occurred to me to think of this in a game theoretic framework. It is a (very) finite game, so we could just write down the decision nodes (the possible outcomes), which are fixed regardless of either player's play. However, I think getting to a subgame perfect Nash equilibrium would be hard.

(An SPNE is a pair of strategies for the player and the bank such that neither can change his strategy without possibly doing worse in some subgame. The best way to think about it is that the strategy pair is optimal at every last decision node. So you know what each player will do last. Knowing this, you can plot your second-to-last move. But now you know how each player's last two moves will work. So now you can plot your third-to-last move... etc.)

SPNEs are of questionable value because they may not be 'optimal' in some sense. Moreover, applying game theory to this would be a mess, I think.

Let's take a really dumbed-down version of the game. In my version there are only two suitcases and three moves:

1. You pick a case
2. I make you an offer
3. You chose

This is actually the last possible move of the regular game, so looking at it is necessary for analyzing the regular game anyway.

Now the first move is completely stochastic and has no strategy implications. In fact, we could replace it with a coin flip or whatever. So the bank here has the first move. Its goal would be to offer the least amount of money that is less than its certainty equivalent for the (to the bank, negative) lottery it's about to play and that it believes I will take. Since there will be no more moves, the bank can assume that my strategy will be to take the offer if it is greater than my certainty equivalent for the (to me, positive) lottery.

If the bank knew my utility function, it could do this perfectly. It would offer me 1 cent over my certainty equivalent and I'd take it (if I were behaving rationally). Then, it would have 'won' the difference.

The bank doesn't know my utility function, however. In my simplified game, the bank knows nothing about my utility function. Its offer has to be optimal against all of my strategies (but we already know that my strategy is determined by my utility). So the bank would have to do some crazy integration over the weighted population of utility functions. If it knew nothing about the population of utility functions, it would have to assume a uniform distribution. In that case its only optimal strategy (and this is terrible for the bank) would be, I think, to offer a penny below its own certainty equivalent.
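For what it's worth, here's the certainty-equivalent bookkeeping for a two-suitcase endgame, with log utility and a made-up outside wealth standing in for the unknown utility function:

```python
import math

def certainty_equivalent_log(prizes, probs, wealth):
    """Certainty equivalent of a lottery for a log-utility player with some
    outside wealth. Log utility and the wealth figure are illustrative only."""
    eu = sum(p * math.log(wealth + x) for p, x in zip(probs, prizes))
    return math.exp(eu) - wealth

# Two cases left: $1 and $1,000,000, 50/50.
prizes, probs = [1, 1_000_000], [0.5, 0.5]

player_ce = certainty_equivalent_log(prizes, probs, wealth=50_000)
print(round(player_ce))   # roughly $179,000, far below the $500,000 expected value

# Per the reasoning above: a bank that knew this CE could offer a penny more
# and win the difference; a bank that knows nothing about my utility can't
# safely offer less than a penny below its own certainty equivalent.
```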

Now, you argue, in the real game the bank does have some knowledge of my utility function. If we get to the last 2 suitcases, it's made a few (8 or so?) offers already. So it has some data points about my preferences over past lotteries. While it technically might be possible to extract some information, I think the problem is really, really hard. Maybe I'll bounce it off of a few friends who are better than me at this stuff...
 
#40
justme said:
If it knew nothing about the population of utility functions, it would have to assume a uniform distribution. In that case its only optimal strategy (and this is terrible for the bank) would be, I think, to offer a penny below its own certainty equivalent.
So, to put this in language that hopefully I can understand...
Taking your simplified example above (where the bank knows nothing), if the two cases held $1 million and $1, what would the bank's only optimal offer be?


To me, this stark example of $1 million and $1 really brings to light how PLAYER DEPENDENT optimal strategy really is (or, looked at another way, bank dependent). The example could just as well be $50 and $1.

The difference between winning $750,000 and $1,000,000 is much less than the difference between winning $1 and $250,001, even though the dollar difference is the same.
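One way to see that with the same illustrative log utility: the dollar gaps are equal, but the utility gaps aren't even close.

```python
import math

def u(x):
    """Illustrative log utility; shift by $1 to avoid log(0)."""
    return math.log(x + 1)

# Two $250,000 dollar differences, very different utility differences.
low_gap = u(250_001) - u(1)            # going from $1 to $250,001
high_gap = u(1_000_000) - u(750_000)   # going from $750,000 to $1,000,000

print(round(low_gap, 2), round(high_gap, 2))   # about 11.74 vs 0.29
```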
 