(h) Methods of applying the above axioms.
We assume that probability is a measure of the degree of belief that one ought to have, given certain evidence about an event, and that it satisfies the axioms given above. In order to apply the theory one must be able to judge that two events are equally probable, or at least sufficiently nearly so for all practical purposes. For example, if there were a barrel containing 10,000 ordinary pennies and one double-headed one, all thoroughly mixed up, we should judge that a coin chosen at random would have an equal probability of being any of the coins. Therefore by the law of addition it follows that the probability of drawing the double-headed coin is 1/10,001, or odds of 10,000:1 against, i.e. about 10^-4. In the example of decibanning given above, the coin might have been chosen in this way. This is quite a good analogue of the sort of thing done in cryptographic problems, namely one looks for needles in haystacks and the object chosen has to have a large factor in favour of being a needle in order to overcome its prior odds. (It will be observed that one would take a long time to find the needle if one could not estimate the factor very quickly - hence the necessity of machines in such problems.)
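The arithmetic of the barrel example may be sketched as follows; this is a rough illustration in modern terms, not part of the original calculation, and the helper decibans simply assumes the earlier definition of a deciban as ten times the common logarithm of the odds or factor.

    import math

    def decibans(odds):
        # Odds in favour expressed in decibans: ten times the common logarithm.
        return 10 * math.log10(odds)

    # Prior odds that a coin drawn from the barrel is the double-headed one:
    # 10,000 to 1 against, i.e. probability 1/10,001, about 10^-4.
    prior = decibans(1 / 10_000)             # about -40 decibans
    print(prior)

    # Each head observed contributes a factor of 2 in favour of the coin being
    # double-headed rather than ordinary (about 3 decibans), so the number of
    # consecutive heads needed to bring the odds past evens is:
    print(math.ceil(-prior / decibans(2)))   # 14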
Another thing that is often necessary in practice is to make a probability judgement of the type that a certain probability lies in a rather large interval. For example, if a man produced a coin and began to toss it, you might be able to judge by his manner and some half-remembered facts that the probability of its being a double-headed coin must lie between 1/1,000,000 and 1/100. If no such judgement were possible you could never assert that you believed the coin to be double-headed, even if it came down heads 100 times running.
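The point of the last sentence can be illustrated numerically (the figures are illustrative only): each head multiplies the odds in favour of a double-headed coin by 2, so 100 heads running supply a factor of 2^100, and even the unfavourable end of the judged interval then gives overwhelming posterior odds, whereas with no lower bound at all on the prior probability no finite factor would justify belief.

    import math

    factor = 2 ** 100                    # 100 heads running, a factor of 2 per head

    for prior_prob in (1e-6, 1e-2):      # the two ends of the judged interval
        prior_odds = prior_prob / (1 - prior_prob)
        posterior_odds = prior_odds * factor
        print(f"prior {prior_prob:g}: posterior odds about 10^{math.log10(posterior_odds):.0f}")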
(i) Theorem of the weighted average of factors.
Suppose that a number of unreliable witnesses each says that a certain event E has happened but it is known that one and only one of them has in fact seen the evidence. Let the probabilities that the witnesses have seen the evidence be p₁, p₂, ... and the factors in favour of an hypothesis H be ƒ₁, ƒ₂, ... respectively. Then the resulting factor is Σpᵢƒᵢ.
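As an illustrative check of the theorem (the figures and the simplifying assumption below are not part of the original text), take two witnesses and suppose that the probability of the testimony on the rival hypothesis is the same whichever witness actually saw the evidence; the factor computed directly from the probabilities then agrees with the weighted average Σpᵢƒᵢ.

    p = [0.7, 0.3]            # probabilities that witness 1, 2 saw the evidence
    f = [10.0, 2.0]           # factor each would contribute in favour of H

    q = 0.05                               # P(testimony | not H), assumed the same for both
    prob_H = [fi * q for fi in f]          # P(testimony | H, witness i), since each factor is the ratio

    direct = sum(pi * ai for pi, ai in zip(p, prob_H)) / (q * sum(p))
    weighted = sum(pi * fi for pi, fi in zip(p, f))
    print(direct, weighted)                # both 7.6, up to rounding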
As a special case suppose that an experiment is done and it has a probability p of having been done correctly, in which case it contributes a factor ƒ to a certain hypothesis. If it is done incorrectly it supplies no evidence, i.e. a factor of 1. Then the resulting factor is pƒ + (1 - p). This special