Decoding Reality: The Universe as Quantum Information, by Vlatko Vedral, 2010, Oxford University Press, 229 pages
In which Oxford professor Vlatko Vedral proves that you can explain anything if you are vague enough. And he chuckles about the people who take their vague understandings too far, like believing they can teleport their socks to the dryer, but methinks he is contributing to the problem.
Vedral opens with a big promise: “surely the most exciting and fundamental question of all has to be: why is there a reality at all and where does it come from? … I will argue in this book that the notion of ‘information’ gives us the answer to both questions.”
Now, that would be a book worth reading.
But this book?
In Part One, Vedral defines “information” as a function of probability: the more surprised you are by something, the more informative it is. If Bob thinks it’s very likely his girlfriend will say A, then when she says B “He would be very surprised, given that he attributed a low probability to it … and thus this message carries more information.”
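(For the curious: the idea Vedral is gesturing at is Shannon's measure of information content, where an event of probability p carries -log2(p) bits, so improbable answers really do carry more information. A minimal sketch, with the probabilities made up for illustration:)

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon information content of an event with probability p, in bits."""
    return -math.log2(p)

# The answer Bob expects carries little information...
print(surprisal_bits(0.9))  # roughly 0.15 bits
# ...while the answer he thinks unlikely carries far more.
print(surprisal_bits(0.1))  # roughly 3.32 bits
```

A fifty-fifty answer works out to exactly one bit, which is where the unit comes from.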
Though I have lots of questions, I’ve made a tentative peace with this assertion (just enough to keep reading), but no thanks to Vedral. He sets it out there as a given and moves cheerily on.
He’s very cheery, this man. “If we faithfully teleported every atom in your body, would that necessarily be the same as teleporting you? In other words, is your body all there is to you? The answer to this is that we really have no idea!” Fine, cheery is nice, but where’s my answer to why reality exists?
It’s not in Part One, which consists of a chapter on how genes are information, and a chapter on how social networks are information, and a chapter on how card games are all about information and how the house rules of casinos mirror the laws of thermodynamics (the house always wins = The First Law of Thermodynamics, you can’t get something for nothing), and a chapter on how global warming is inevitable because information means heat.
Did I mention that there’s really no further exploration of that oh-so-intriguing definition of “information”?
How about Part Two, which gets down to physics? Sadly, no. Vedral breezes over the crucial parts of things. He spends a few pages describing a particular experiment that’s been done with light beams, where the results always come out in a certain perplexing, hard-to-understand way. It’s interesting. But just as we’ve gotten hooked and can’t wait to find out where the mystery leads, here’s the payoff: “Strangely the only conclusion that we can reach is that [the light beam] must actually be reflected and pass through at the same time.” I’d like very much to know why that is the only conclusion we can reach, but Vedral isn’t telling. (I’m fairly confident other books at least give it a stab.) A few paragraphs later, where he ought to be elaborating, he just repeats: “Ultimately, this experiment can only be explained if we consider that the photon does not fully reflect or pass through, but actually does both at the same time. This is simply the only explanation that allows for the result that we observe.” Trust me, darlings.
Elsewhere he writes: “This quantum indeterminism … can be used to understand how electrical current can flow without any resistance whatsoever in a superconductor, how neutron stars manage to overcome gravity, how big spiders manage to climb vertical walls, and why you don’t fall through your floor given that there is so much empty space between the atoms in your floor. The answers to all these puzzling questions lie in an understanding of quantum theory.” Answers which you will not find here. He moves on and never mentions them again. So what was that about people over-generalizing with quantum physics?
The frustration culminates in the penultimate chapter when Vedral considers how much information the human head can hold.
He says the information capacity of an object depends on its mass and area. He doesn’t explain this well, or define what sort of “information” this refers to, and I’ve no idea how this relates to the probability of the human head. Based on size, he calculates that a typical human head can hold 10^44 bits of information.
Then he says “Compare this to the best current computers which are still only of the order of 10 to the power of 14 bits of information. We therefore need 10^30 to get the equivalent information carrying ability of a human head!”
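(I’ll grant him the arithmetic, at least: dividing one power of ten by the other does give his figure. A one-line check:)

```python
head_bits = 10**44      # Vedral's estimate for a human head
computer_bits = 10**14  # his figure for a current computer

# How many such computers would match one head?
print(head_bits // computer_bits)  # prints 10**30, i.e. a 1 followed by 30 zeros
```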
There’s no discussion whatsoever of how this might (or not) relate to the mass and area of the computer, and the probability of … what, the computer’s existence? How would any of this map on to the discussion of Bob’s girlfriend saying something surprising versus unsurprising? If the computer is painted in purple stripes and I was expecting solid gray, does it carry more information? Until someone looks at it who was expecting it to be purple striped? Then what? What if we’re both looking at it at the same time? Then what?
These may be layperson-type questions, but then this is a layperson-type book, not an advanced text, so they deserve addressing.
Big answers promised, but not delivered. D+ — Lisa Parsons