I am a super-mega-dork, because I stayed up until stupid o'clock last night working on a math problem.
Technology Review (which all MIT alumni get) has a puzzle column, and I was reading it last night and got sucked into the racquetball problem, which goes like this:
In racquetball, if the server doesn't score a point, the other guy gets to serve. Whoever gets 15 points first wins. If the score is 13 to 14, the guy with 13 points is serving, and whoever serves has probability p of scoring a point, what does p have to be for it to be more likely that the guy with 13 points will win the game?
The answer is, "a little bit more than 55% or greater", and you totally don't care where that comes from, but I feel like explaining it anyway. Luckily for you, I'm nice and will hide it behind a cut rather than inflict it upon you involuntarily.
The thing that makes this tricky is the fact that if the server doesn't score a point, the serve passes to the other player, and it can go back and forth over and over as long as nobody scores a point. Figuring out how this translates into numbers (well, letters) is easier if you draw a little diagram showing the different possible states, like this:
(13)/14 <-- q --> 13/(14) -- p --> 13/15*
   \
    \
     -- p --> (14)/14 <-- q --> 14/(14) -- p --> 14/15*
                 \
                  \
                   -- p --> 15/14*

Notation: (13)/14 means player A has 13 points and player B has 14, and player A is serving; the * indicates a final state where someone has won; the arrows show the transitions between states, labeled with the probability of each transition.
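(Aside: if you'd rather let a computer bang on that diagram than take my algebra on faith, here's a quick Monte Carlo sketch of the serve-passing process. This is just my own throwaway sanity check, not anything from the magazine.)

```python
import random

def simulate_game(p, a_score=13, b_score=14, a_serving=True, target=15):
    """Play out one game from the given state.

    The server scores with probability p; otherwise the serve passes
    (no point is scored on a side-out). Returns True if player A wins.
    """
    while a_score < target and b_score < target:
        if random.random() < p:          # server scores and keeps serving
            if a_serving:
                a_score += 1
            else:
                b_score += 1
        else:                            # side-out: serve passes, no point
            a_serving = not a_serving
    return a_score == target

def estimate_win_prob(p, trials=100000):
    wins = sum(simulate_game(p) for _ in range(trials))
    return wins / trials

print(estimate_win_prob(0.6))   # with p = 0.6, A comes out just barely ahead
```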
So we can move from the initial state to 13/15* by going qp, or qqqp, or qqqqqp, and so on. We can get to player A winning by pp, or by qqqqpp, or pqqp, and so on. Recall that for independent events P(X and Y) = P(X)·P(Y), and for mutually exclusive events P(X or Y) = P(X) + P(Y). Thus:
P(Player A wins) = (p + q²p + q⁴p + ...) · (p + q²p + q⁴p + ...)
and
P(Player A loses) = (qp + q³p + q⁵p + ...) + p·(qp + q³p + q⁵p + ...)
Which we can rewrite as:
Condition for A winning: [p ∑_{n=0}^∞ (q²)ⁿ]² > pq(1+p) ∑_{n=0}^∞ (q²)ⁿ
Now, since |q²| < 1 (q is a probability, hence between 0 and 1), the sum is a geometric series, and it's equal to 1/(1-q²). Which gives us for the win condition:
p/(1-q²) · p/(1-q²) > pq(1+p)/(1-q²)
Cancel a factor of p/(1-q²) from both sides, multiply through by (1-q²), and you can rewrite that as
p > q(1+p)(1-q²)
Now substitute back in q = (1-p), do a lot more algebraic rearrangement, and you end up with
p³ - 2p² - p + 1 < 0
Does that give us an answer? Well, it's a cubic, so there's at least one real root, and p=0 gives us 1, while p=1 gives us -1, so it must cross zero somewhere between the two, which means there's a solution. Awesome!
Now, at this point, we could look up the cubic formula and plug numbers into it, which is incredibly hairy, but it's much easier to just throw numbers into a spreadsheet real quick or google a cubic equation calculator, either of which will give you in short order a root of about 0.55496. So player A is more likely to win than lose if p is greater than about 0.55496. Hurrah!
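(If you don't have a spreadsheet handy either, here's that last numerical step as a few lines of Python — plain bisection on the cubic above, nothing fancy. The code is just my sketch of the "throw numbers at it" step.)

```python
def f(p):
    # the cubic from the derivation above
    return p**3 - 2*p**2 - p + 1

# f(0) = 1 > 0 and f(1) = -1 < 0, so bisect on [0, 1]
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid

print((lo + hi) / 2)   # ~0.55496, where the cubic crosses zero
```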
I also found a covering of the chessboard with 12 knights (every square either occupied or attacked), but I can't prove that's the minimum number you need, though I'm pretty sure it is.
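(If you'd like to check a candidate covering by machine instead of squinting at a board, a little helper like this does the job — the list of squares you feed it is whatever candidate you want to test; I'm not spoiling my 12-knight arrangement here.)

```python
def uncovered_squares(knights):
    """Return the squares of an 8x8 board that no knight occupies or attacks."""
    moves = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    covered = set(knights)
    for (r, c) in knights:
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < 8 and 0 <= nc < 8:
                covered.add((nr, nc))
    return [(r, c) for r in range(8) for c in range(8) if (r, c) not in covered]

# plug in your own candidate covering as (row, col) pairs, e.g.:
# print(uncovered_squares([(2, 2), (2, 5), ...]))  # an empty list means the board is covered
```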
EDIT: I found a site that has knight coverings! Optimality proofs down at the bottom.
no subject
Date: 2006-09-10 11:49 am (UTC)
Covering the chessboard is one of those things that a constraint satisfaction engine is really the right thing for, I think.
no subject
Date: 2006-09-10 02:31 pm (UTC)
no subject
Date: 2006-09-10 03:07 pm (UTC)
i am not reading your post, but rather am trying somewhat idly (and presumably continually less idly throughout the day) to solve your little puzzle.
ditto the knights thing, on which I think I might be close to tightening the lower bound from 8 to 10.
no subject
Date: 2006-09-10 04:04 pm (UTC)
no subject
Date: 2006-09-10 04:21 pm (UTC)
So that is to say, there's a more general way that I would solve the problem, and understanding it can be very helpful. (In a domain I didn't know well, we used this kind of approach a while back to compute the probability of a certain kind of evolutionary network occurring in the simplest model of evolution.)
That said, your answer is a good way of explaining it if you're willing to talk about infinite series, but not Markov chains, which probably describes a much larger audience.
It occurs to me (and I'm thinking about this right now in another window...) that you could use this style of approach to answer a question I wanted to answer around 15 years ago: determining the optimal strategy for fighting battles in Risk, and computing the victory probabilities.
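(The per-roll transition probabilities are easy enough to brute-force, at least — here's a rough sketch of that ingredient, assuming the standard rules where both sides' dice are sorted, compared pairwise, and ties go to the defender:)

```python
from itertools import product

def roll_off_distribution(attack_dice=3, defend_dice=2):
    """Distribution of (attacker losses, defender losses) for one Risk roll-off."""
    counts = {}
    total = 0
    for a in product(range(1, 7), repeat=attack_dice):
        for d in product(range(1, 7), repeat=defend_dice):
            a_sorted = sorted(a, reverse=True)
            d_sorted = sorted(d, reverse=True)
            a_loss = d_loss = 0
            for x, y in zip(a_sorted, d_sorted):   # compare highest vs highest, etc.
                if x > y:
                    d_loss += 1
                else:                              # defender wins ties
                    a_loss += 1
            counts[(a_loss, d_loss)] = counts.get((a_loss, d_loss), 0) + 1
            total += 1
    return {k: v / total for k, v in counts.items()}

print(roll_off_distribution())   # roughly {(0, 2): 0.37, (1, 1): 0.34, (2, 0): 0.29}
```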
no subject
Date: 2006-09-10 05:42 pm (UTC)
So, let's see, if I were following the standard formalism, I'd use the diagram to make a transition matrix. If I number the states in the diagram 1-7 going in reading order, I get something like this:
Is that right? What do you do with the inversion to get the solution?
no subject
Date: 2006-09-10 05:59 pm (UTC)
Pretty much, with one tweak: the three states you can't escape from (the finished games) should map back to themselves with probability 1, i.e. put 1s on those diagonal entries. And, really, you don't care if the final score was 13-15 or 14-15, so join those two states together.
Now, let's call this matrix A. The initial probability distribution is the column vector v = [1,0,...,0]': that is, we always start in state 1. (True Markov chains can allow for uncertainty in the initial state.) A*v tells us the probability distribution after one step. A*A*v tells us the distribution after two steps. A^1000000*v tells us what we really want to know, which is "what is the probability distribution that tells us where the chain is after we've let it go for a really long time?" [Now it should be clear why we wanted those 1s on the diagonal in the transition matrix.]
So we essentially want to know what A^∞*v looks like. I always get the formula wrong, but I think this is something like (I-A)^-1*v, where I is the identity matrix. It's in a book on my shelf at the office...
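(Concretely, the brute-force "raise A to a big power" version might look something like this — just a sketch using numpy, with the seven states numbered in reading order off the diagram:)

```python
import numpy as np

def absorption_distribution(p, steps=200):
    """Long-run distribution over the 7 diagram states, numbered in reading order:
    1:(13)/14  2:13/(14)  3:13/15*  4:(14)/14  5:14/(14)  6:14/15*  7:15/14*
    A[i, j] = probability of moving from state j+1 to state i+1 on one serve.
    """
    q = 1 - p
    A = np.zeros((7, 7))
    A[1, 0] = q; A[3, 0] = p            # (13)/14 -> 13/(14) or (14)/14
    A[0, 1] = q; A[2, 1] = p            # 13/(14) -> (13)/14 or 13/15*
    A[4, 3] = q; A[6, 3] = p            # (14)/14 -> 14/(14) or 15/14*
    A[3, 4] = q; A[5, 4] = p            # 14/(14) -> (14)/14 or 14/15*
    A[2, 2] = A[5, 5] = A[6, 6] = 1     # the three finished states loop to themselves
    v = np.zeros(7)
    v[0] = 1                            # we always start at (13)/14
    return np.linalg.matrix_power(A, steps) @ v

dist = absorption_distribution(0.6)
print(dist[6])             # P(A wins): weight on 15/14*, a hair over 0.5 at p = 0.6
print(dist[2] + dist[5])   # P(B wins): weight on 13/15* plus 14/15*
```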
no subject
Date: 2006-09-10 06:01 pm (UTC)
no subject
Date: 2006-09-10 06:36 pm (UTC)
Neat! Thanks!
no subject
Date: 2006-09-10 06:39 pm (UTC)
no subject
Date: 2006-09-10 07:29 pm (UTC)
*cough* *coughsuper-mega-dorkcough* *cough*
--G
no subject
Date: 2006-09-10 11:14 pm (UTC)no subject
Date: 2006-09-11 04:23 am (UTC)
My assumption would be 13:14 (the score so far) instead of 1:1. I'd also weight the probabilities that a point would be scored in 1, 2, 3... rounds asymptotically.
But then again I'm a geek-dweeb-nerd. Or a mathochist.
no subject
Date: 2006-09-11 01:45 pm (UTC)
no subject
Date: 2006-09-11 01:54 pm (UTC)
Why adjust the probabilities in subsequent rounds? If you have a constant probability for server scoring a point, you get an exponential decay in the distribution of round lengths.
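(That is, the number of serves it takes before somebody finally scores is geometrically distributed — a tiny sketch of the decay:)

```python
def serve_length_distribution(p, max_serves=8):
    """P(the point is scored on the k-th serve) = (1-p)**(k-1) * p."""
    q = 1 - p
    return {k: q**(k - 1) * p for k in range(1, max_serves + 1)}

print(serve_length_distribution(0.55))   # each extra serve is (1-p) times as likely
```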
no subject
Date: 2006-09-11 04:52 pm (UTC)
no subject
Date: 2006-09-15 04:40 am (UTC)
Meany.
no subject
Date: 2006-09-15 08:32 pm (UTC)
Sorry, hope I didn't spoil it for ya.