Thursday, December 13, 2007

Santa Claus does too exist!

Contrary to the assertion of the scandalous propaganda on the left, Santa Claus does indeed exist. And I can prove it. Start with proposition A:
A. If A is true, then Santa Claus exists.
Now, suppose A were true. Then it would follow that if A is true, then Santa Claus exists, and again since we're supposing A is true, it would follow that Santa Claus exists. So we've shown that if A is true, then Santa Claus exists. But that is proposition A, so we've proven that proposition A is true. So that means that Santa Claus exists! (A remarkable conclusion given my recent post on things that probably don't exist.)

The only trouble is that the reasoning above lets you prove anything (e.g. that penguins rule the universe). It's an example of Curry's paradox, which can't be easily explained away, and is the subject of ongoing research by logicians.

Bah humbug!

Still not convinced? Thomas Aquinas to the rescue! Well, actually, his modern admirers. Aquinas came up with 5 ways of proving the existence of God. Dr. Joseph Magee, a Thomistic scholar, has used similar methods to develop 5 ways of proving the existence of Santa Claus. For example:
The fourth way is taken from the grades which are found in Christmas spirit. Indeed, in this world, among men there are some of more and some of less Christmas spirit. But "more" and "less" is said of diverse things according as they resemble in their diverse ways something which is the "maximum." Therefore there must be something which has the most Christmas spirit, and this we call Santa Claus.
I would question, however, the implicit assumption that it's a man who fits the bill.

Visions of sugarplums

If you think I'm just trying to flatter the jolly old elf so as to garner more loot this Christmas, well ... keep quiet about it, would ya?



Monday, December 10, 2007

log base 2

It turns out that the number 1 reason people visit this blog is to calculate log base 2 of an integer. So here is log2 of 1 through 10, to 16 digits of precision:

log2(1) = 0
log2(2) = 1
log2(3) = 1.584962500721156
log2(4) = 2
log2(5) = 2.321928094887362
log2(6) = 2.584962500721156
log2(7) = 2.807354922057604
log2(8) = 3
log2(9) = 3.169925001442312
log2(10) = 3.321928094887362

Note that log2(x) is defined for any x greater than zero. If you have a calculator that computes the natural logarithm (often denoted ln), then you can calculate log2(x) = ln(x)/ln(2). The same thing works with log base 10, i.e. log2(x) = log10(x)/log10(2).
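The change-of-base formula above is easy to sketch in Python (using the standard math module; modern Pythons also have math.log2 built in):

```python
import math

def log2(x):
    """Compute log base 2 of x via the change-of-base formula."""
    if x <= 0:
        raise ValueError("log2 is only defined for x > 0")
    return math.log(x) / math.log(2)

print(log2(8))   # ~3.0 (up to floating-point rounding)
print(log2(10))  # ~3.321928094887362
```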

But what does it mean?
log2(x) means the power to which you have to raise 2 in order to get x. For example, 2^2 = 4, so log2(4) is 2. Similarly, 2^3 = 8, so log2(8) = 3. It turns out that 2^1.58496 is very nearly 3, so log2(3) is roughly 1.58496.

Some cases deserve special mention. log2(2) = 1 because 2^1 is 2. log2(1) = 0 because by mathematical convention 2^0 = 1 (this holds not just for 2, but for any base). Finally, note that log2(0) is undefined, although some software will return -Infinity (which is the limit of log2(x) as x approaches zero).
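You can check these special cases in Python. Note that Python's math module raises an error at zero rather than returning -Infinity (NumPy's log2 is one example of software that returns -inf instead):

```python
import math

print(math.log2(2))  # 1.0
print(math.log2(1))  # 0.0

# log2(0) is undefined; Python's math module raises ValueError here
try:
    math.log2(0)
except ValueError:
    print("log2(0) is undefined")
```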

What is it used for?

The logarithm is useful for a variety of purposes. One of the more common is when describing exponential growth or decay. For example, the time for a radioactive substance to decay to half its mass is called the half life. Similarly we can describe accelerating growth in terms of the doubling time. I previously applied this to the number of blogs tracked by Technorati.

In computing, log2 is often used. One reason is that the number of bits needed to represent a positive integer n is given by rounding down log2(n) and then adding 1. For example log2(100) is about 6.643856. Rounding this down and then adding 1, we see that we need 7 bits to represent 100. Similarly, in order to have 100 leaves, a binary tree needs at least 7 levels, which is log2(100) rounded up. In the game where you have to guess a number between 1 and 100 based on whether it's higher or lower than your current guess, a halving strategy brackets the answer in at most 7 guesses, again log2(100) rounded up.
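These two computations can be sketched in a few lines of Python (the bit count can be cross-checked against Python's built-in int.bit_length):

```python
import math

def bits_needed(n):
    """Bits needed to represent the positive integer n:
    floor(log2(n)) + 1."""
    return math.floor(math.log2(n)) + 1

def guesses_needed(n):
    """Worst-case guesses to find a number in 1..n by repeated halving:
    ceil(log2(n))."""
    return math.ceil(math.log2(n))

print(bits_needed(100))    # 7
print(guesses_needed(100)) # 7
```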

Two much of nothing

Although I can't provide additional help to people with logarrhythmias, I hope this note is of some assistance.



Sunday, December 09, 2007

Things that (probably) don't exist

A recent article by philosopher Steven Hales is titled "You Can Prove a Negative" (a slightly different version of the article is available as a pdf file). Hales argues that the "principle of folk logic" saying you can't prove a negative is just plain wrong.

He points out that "any claim can be expressed as a negative, thanks to the rule of double negation." So it's easy to come up with examples of proving a negative. Hales goes on to say that "Some people seem to think that you can’t prove a specific sort of negative claim, namely that a thing does not exist." He counters this with an example of a valid proof that something doesn't exist:
1. If unicorns had existed, then there is evidence in the fossil record.
2. There is no evidence of unicorns in the fossil record.
3. Therefore, unicorns never existed.
Of course, the difficulty here is with the truth of the premises (1 and 2). In particular, it could be that we just haven't found unicorn fossils yet. Or perhaps, unicorns don't leave a fossil trace. Deductive arguments are so neat and tidy we may forget about what's been swept under the carpet: the truth (or otherwise) of the premises.

Finally Hales grasps the nettle:
Maybe people mean that no inductive argument will conclusively, indubitably prove a negative proposition beyond all shadow of a doubt. For example, suppose someone argues that we’ve scoured the world for Bigfoot, found no credible evidence of Bigfoot’s existence, and therefore there is no Bigfoot. A classic inductive argument. A Sasquatch defender can always rejoin that Bigfoot is reclusive, and might just be hiding in that next stand of trees. You can’t prove he’s not! (until the search of that tree stand comes up empty too).

And now we come to the heart of the matter:
The problem here isn’t that inductive arguments won’t give us certainty about negative claims (like the nonexistence of Bigfoot), but that inductive arguments won’t give us certainty about anything at all, positive or negative. All observed swans are white, therefore all swans are white looked like a pretty good inductive argument until black swans were discovered in Australia.
Well, hold on just a moment. We were talking about "a specific sort of negative claim, namely that a thing does not exist". And the swan argument hasn't been written that way. If we do write it that way, we get the inductive argument no observed swans are black, therefore all swans are non-black. So non-existence claims based on observation are uncertain.

But what about existence claims based on observation? Well, you only have to see one black swan to conclude that not all swans are white, and this inference is certain because it's deductive. (This is, of course, provided that we can trust that what we've seen really is a swan, and it really is black, and that we didn't just imagine the whole thing. There are some important issues here, but taking this too far can lead to radical skepticism, which is unproductive.)

My point is that when it comes to using observational evidence to argue for existence (a positive claim) or non-existence (a negative claim), you can't prove a negative, whereas you can prove a positive. (Here I'm using "prove" to mean "establish with certainty".) So, in this sense, I disagree with Hales. And I think that this is what people typically mean when they state that "you can't prove a negative". I also think that the imbalance in the difficulty of demonstrating non-existence compared to existence is a strong argument that the burden of proof should be on those who claim the existence of something.

I agree with Hales, however, in his defense of induction:
The very nature of an inductive argument is to make a conclusion probable, but not certain, given the truth of the premises. That's just what an inductive argument is. We’d better not dismiss induction because we’re not getting certainty out of it, though.
I believe we all crave certainty, but it's in pretty short supply—caveat emptor.

If we weren't so terrified of uncertainty, we might make much better decisions. When it comes to things that can be quantified, the field of statistics offers some very useful tools for dealing with uncertainty. Suppose, for example, we're trying to determine whether all swans are white. If we sample, at random, 100 swans, and each of them is white, then a very useful approximation, the "Rule of Three" tells us that we can have 95% confidence that the true proportion of non-white swans is less than 3/100 or 3%. Suppose we continue sampling swans and they stubbornly continue to be white. Having sampled 10,000 white swans, we can now have 95% confidence that the true proportion is less than 3/10,000 or 0.03%.
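The Rule of Three calculation above is simply 3/n, where n is the number of trials in which the event was never observed. A minimal Python sketch:

```python
def rule_of_three_upper_bound(n):
    """Approximate 95% upper confidence bound on the true proportion,
    given n independent trials with zero occurrences observed."""
    if n <= 0:
        raise ValueError("n must be a positive number of trials")
    return 3.0 / n

print(rule_of_three_upper_bound(100))    # 0.03
print(rule_of_three_upper_bound(10000))  # 0.0003
```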

The notion of "95% confidence" can be made precise (but I won't get into the details here). It's also noteworthy that there are Bayesian analogues to the Rule of Three. Details are in Jovanovic and Levy, A Look at the Rule of Three, 1997, The American Statistician, 51: 137-139.

Unfortunately, there's a major difficulty in the application of the Rule of Three to the swan example: the assumption that the swans are randomly sampled! It turns out that the black swans were hiding out in Australia. But there's a message here: non-random samples can give very misleading information. That's one reason why anecdotal evidence is treated so skeptically by scientists.

For an atheist perspective on the "you can't prove a negative" idea, see here. And here's a page on burden of proof relating to claims of existence, from philosopher Philip Pecorino.

Update 12Dec2007: I sent a link to this post to Professor Hales and he kindly replied:
You write that you only have to see one black swan to know that not all swans are white, and that “this inference is certain because it is deductive.” But wait—the argument I gave about unicorns was also deductive, and you dismissed that as proving its conclusion. Therefore you can’t hold that the conclusion of your swan argument is certain because the argument form is deductive. If the conclusion of the swan argument is certain, then it is for some other reason. I suspect that you think it is certain because you are convinced of your premise that we have seen black swans. Of course, I’m rather convinced of my premises that if unicorns had existed, then there is evidence in the fossil record, and that there is no evidence of unicorns in the fossil record. Before you rejoin that we could find out that we are mistaken about the fossil record (as we would discover if we locate a unicorn skeleton), let me point out that we could also be mistaken about observing black swans. Maybe upon further study we’ll find out that they aren’t swans at all, but are merely related to swans. Or we could discover that they were phony, dyed white swans prepared to fool naïve naturalists. Or we might show that other even more skeptical hypotheses are true (mass hallucinations, dreaming, etc.). The real problem, as I see it, is your equation of proof with certainty. Most epistemologists don’t think we are certain of anything outside of logic, mathematics, and other things known a priori. There is always the possibility of error. But that doesn’t mean that we can’t prove things in some reasonable, real-world sense of prove.

