Brendan wrote:Hi,
Owen wrote:Brendan wrote:If you think "one bit per particle in the observable universe" is a limit, then alternatives include:
- find ways to observe more of the universe
Relativity says no to that one (Also: If you can observe more than the observable universe, then you have implied the ability to time travel, and I don't even want to think of the complications that implies).
From Wikipedia:
"Some parts of the universe may simply be too far away for the light emitted from there at any moment since the Big Bang to have had enough time to reach Earth at present, so these portions of the universe would currently lie outside the observable universe. In the future, light from distant galaxies will have had more time to travel, so some regions not currently observable will become observable."
Basically, the only thing we need to do to observe more of the universe is wait (do nothing for long enough).
"However, due to Hubble's law regions sufficiently distant from us are expanding away from us much faster than the speed of light (special relativity prevents nearby objects in the same local region from moving faster than the speed of light with respect to each other, but there is no such constraint for distant objects when the space between them is expanding; see uses of the proper distance for a discussion), and the expansion rate appears to be accelerating due to dark energy"
Brendan wrote:Owen wrote:Brendan wrote:- store more than 1 bit of information per atom (e.g. let's say a single atom has a speed ranging from 1 to (2**32+1) in one of 2**32 directions - you'd be able to store 64 bits of information per atom that way).
Heisenberg's Uncertainty Principle says no to that one
Heisenberg's Uncertainty Principle says "the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa". This means we can know the momentum extremely precisely if we don't care what the position is (and in my silly example, there wasn't any need to care what the position was anyway).
We could also do the reverse - rely on the position without caring what the momentum is. For example, let's have 2**64 empty spaces and one atom. Whichever space the atom is in determines the value that atom represents.
If you know the position, you don't know the momentum - which means it is moving in a direction you do not know!
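A rough back-of-the-envelope sketch of how badly (the ~0.1 nm cell size and the choice of a carbon atom are purely illustrative assumptions on my part):

Code:
# Sketch: confine an atom's position to one ~0.1 nm storage cell and see
# how large the unavoidable velocity uncertainty becomes.
# Heisenberg: delta_x * delta_p >= hbar / 2
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_carbon = 1.993e-26      # mass of a carbon-12 atom, kg (illustrative choice)
delta_x = 1e-10           # assumed cell size: ~0.1 nm
delta_p = hbar / (2 * delta_x)   # minimum momentum uncertainty, kg*m/s
delta_v = delta_p / m_carbon     # corresponding velocity uncertainty, m/s
print(f"delta_v >= {delta_v:.0f} m/s")   # ~26 m/s - the atom does not stay put

The more precisely you pin the atom into its cell, the faster it is (potentially) drifting back out of it.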
Brendan wrote:Owen wrote:Brendan wrote:- simply use sub-atomic particles instead (e.g. maybe use photons, because they're unlimited - you can create more photons in your spare time if you ever run out)
Only in the presence of infinite energy.
So now you're saying the real limit is the amount of energy in the universe, and not the number of atoms?
Owen wrote:The point of the number of atoms in the universe as an absolute maximal upper bound is that it is far beyond impossible - the number of bits you can store per atom is finite, but more importantly you need to arrange the atoms in some sort of rigid lattice in order to keep them from wandering off (because if they're moving you will never find them again - see the uncertainty principle), and a single-atom-thick sheet of carbon (pretty much an optimum as far as single-atom-thick sheets go) does not have the rigidity you would need in the face of manipulating the atoms to store data.
Even if you accept that as a correct upper limit (which is highly dubious), it's still wrong (we'll still reach a point where we want to store more than we possibly can).
And I want infinite free energy. That isn't going to happen.
Brendan wrote:Owen wrote:We are using the same kind of principle here which says you can't brute force a 128-bit symmetric encryption key (In this case because it requires more energy than exists in the solar system)
Which is also wrong - if you're extremely lucky, you might guess the key on your very first attempt and consume a very negligible amount of energy.
For specific keys, yes, you might successfully brute force them.
Of course, if your random number generator spat out the key "0x00000000000000000000000000000000", then I'd probably question why it hadn't failed its internal self-test.
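And "extremely lucky" is doing a lot of work there - a quick sketch of the odds (the guess rates and time spans are made-up; the key is assumed to be uniformly random):

Code:
# Sketch: probability of finding a uniformly random 128-bit key within N
# guesses is N / 2**128; the expected number of guesses is about 2**127.
keyspace = 2 ** 128
for guesses_per_sec, years in [(10 ** 9, 1), (10 ** 12, 100)]:
    attempts = guesses_per_sec * 60 * 60 * 24 * 365 * years
    print(f"{guesses_per_sec:.0e} guesses/s for {years} years: "
          f"p(success) ~ {attempts / keyspace:.1e}")
print(f"expected guesses ~ {2 ** 127:.1e}")

Even a trillion guesses per second, sustained for a century, leaves the odds of stumbling on the key at roughly 1 in 10**17.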
Brendan wrote:Of course the energy you consume isn't destroyed either (conservation of energy). You could provide all the energy you need using a lemon and 2 nails (e.g. copper and zinc), as long as you're able to recycle "waste energy" back into a usable form.
Brendan
There are actually scientific principles which define the minimum energy required to perform computation (That is, the minimum energy required to do a trivial binary operation on two bits). The energy consumed is turned into heat (At 100% efficiency), and therefore the amount of energy you can recover from that is dependent upon the amount of cold you can find (per the laws of thermodynamics), where "cold" is defined as something colder than your heat source, in this case your supercomputer.
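(The principle I have in mind is usually called Landauer's limit: at least k*T*ln(2) of heat per irreversible bit operation.) A quick sketch of how strongly that floor depends on how cold you can run the machine (the temperatures are chosen purely for illustration):

Code:
import math

# Landauer's limit: each irreversible bit operation dissipates at least
# k*T*ln(2) of heat, so the floor scales with the computer's temperature.
k_B = 1.380649e-23   # Boltzmann constant, J/K
for label, temp_k in [("room temperature", 300.0),
                      ("liquid nitrogen", 77.0),
                      ("cosmic background", 2.7)]:
    e_min = k_B * temp_k * math.log(2)   # minimum heat per bit operation, J
    print(f"{label:>18} ({temp_k:5.1f} K): {e_min:.2e} J per bit operation")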
Given that you know the temperatures of your heat source and heat sink, you can calculate the maximum power such an engine can produce (The Carnot Engine is an idealised heat engine which defines the maximum amount of energy you can extract from a thermal gradient). Of course, that is going to decrease over time (as you heat up your heat sink).
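As a sketch of that bound (the temperatures are made-up; ~440 K is roughly Mercury's average surface temperature):

Code:
def carnot_efficiency(t_hot_kelvin, t_cold_kelvin):
    """Maximum fraction of heat recoverable as work between two reservoirs."""
    return 1.0 - t_cold_kelvin / t_hot_kelvin

# Illustrative numbers only: a hot-running computer rejecting heat to a
# Mercury-temperature sink, versus an idealised near-0 K sink.
print(carnot_efficiency(1000.0, 440.0))   # ~0.56
print(carnot_efficiency(350.0, 3.0))      # ~0.99 (the idealised best case)

And as the sink warms up, t_cold rises and that recoverable fraction only gets worse.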
Since 99.9% of the mass of the solar system is the sun, you only have the remaining 0.1% to exploit as cold (Requiring a rather warm supercomputer if you wish to use, say, Mercury for this - with a resulting decrease in computational efficiency), and from that we can determine the amount of energy you can recover in this manner.
Suffice to say that it is negligible.
P.S. The calculations which show that the solar system contains insufficient energy to brute force a 128-bit key assume, for purposes of simplification, that the supercomputer is running at 0 K (Impossible) and that it has an infinite heat sink at a temperature of 0 K (Also impossible).