# Physics

How big a memory capacity could you build?

Suppose a team of particle physicists figured out a way to get a bunch of protons in an accelerator up to an energy density of 0.1 or 0.2 of the Planck density, and then observed the Coulomb explosion of this bundle of particles. Less than a second after the Big Bang, the whole universe was a soup of elementary particles at those kinds of densities. What would happen? I don't know, but it might be interesting to find out.
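To put a number on the target, the Planck density follows directly from the fundamental constants: rho_P = c^5 / (hbar * G^2). A minimal back-of-the-envelope sketch, using CODATA values:

```python
import math

# Planck density rho_P = c^5 / (hbar * G^2), from CODATA constants
c = 2.99792458e8        # speed of light, m/s (exact)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's gravitational constant, m^3 kg^-1 s^-2

rho_planck = c**5 / (hbar * G**2)  # mass density, kg/m^3
print(f"Planck density:  {rho_planck:.2e} kg/m^3")
print(f"0.1 x rho_P:     {0.1 * rho_planck:.2e} kg/m^3")
```

The result is of order 5e96 kg/m^3, which gives a feel for how far beyond any existing accelerator this thought experiment sits.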

Unfortunately, I could find no updated fit including the new Higgs search results yet. I guess such a fit will be ready in a few weeks... But the newly released information is already interesting enough that we may meaningfully spend a few words on some figures here.

Werner Heisenberg's uncertainty principle (1927) is a fundamental concept in quantum physics. It says, roughly, that you can measure position or momentum (mass × velocity) with increasing accuracy, but not both at once (1).
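The trade-off can be checked numerically. A Gaussian wavepacket is the state that saturates the bound, delta_x * delta_p = hbar / 2; the sketch below (an illustration, in units where hbar = 1) computes both spreads on a grid and recovers the product:

```python
import numpy as np

# Numerical check of the uncertainty relation for a Gaussian wavepacket,
# which saturates the bound delta_x * delta_p = hbar / 2 (here hbar = 1).
sigma = 0.7
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]

psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize

prob_x = np.abs(psi)**2
delta_x = np.sqrt(np.sum(x**2 * prob_x) * dx)  # mean is zero by symmetry

# Momentum-space distribution via FFT (with hbar = 1, p = 2*pi*f)
p = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
dp = p[1] - p[0]
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= np.sum(prob_p) * dp  # normalize in momentum space
delta_p = np.sqrt(np.sum(p**2 * prob_p) * dp)

print(delta_x, delta_p, delta_x * delta_p)  # product is ~0.5, i.e. hbar/2
```

Squeezing sigma shrinks delta_x but inflates delta_p by the same factor, which is the uncertainty principle in action.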

This can be an important feature rather than a defect in applications like quantum cryptography, where information is transmitted in the form of quantum states such as the polarization of particles of light.

A group of scientists from LMU and ETH Zurich say they have shown that position and momentum *can* be predicted more precisely than Heisenberg's uncertainty principle states - if the recipient makes use of a quantum memory that employs ions or atoms.

If you do not like the figure below, courtesy of the CMS Collaboration 2010, you are kindly requested to leave this blog and spend your time reading something other than fundamental physics. I do not know what will ever make you believe particle physics is beautiful, if not what is shown here.

Dynamic quantum logic

# Abstract

Here I wish to assemble some of the electroweak physics results produced by CMS in time for ICHEP. The CMS experiment has shown results using up to 280 inverse nanobarns of proton-proton collisions, but for electroweak measurements (those involving W and Z signals, to be clear) the statistics used amount to up to 200 inverse nanobarns of well-understood data.
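To get a feel for what 200 inverse nanobarns buys you, the expected candidate count is just N = sigma × L × efficiency. The cross-sections and efficiency below are illustrative assumptions of mine (roughly the right ballpark for 7 TeV), not official CMS numbers:

```python
def expected_events(sigma_nb, lumi_inv_nb, efficiency):
    """Expected yield: cross-section [nb] x integrated luminosity [nb^-1] x efficiency."""
    return sigma_nb * lumi_inv_nb * efficiency

lumi = 200.0  # inverse nanobarns, as quoted above

# Illustrative inputs (assumptions, not official CMS values):
n_w = expected_events(10.4, lumi, 0.5)   # W -> l nu, ~10 nb per lepton flavor
n_z = expected_events(0.97, lumi, 0.5)   # Z -> l l, ~1 nb per lepton flavor
print(f"~{n_w:.0f} W and ~{n_z:.0f} Z candidates per lepton flavor")
```

The order of magnitude - a thousand W's and a hundred Z's per flavor - explains why W and Z signals were already usable with such a small dataset.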

I thought that I knew what a unitary transform is, until I started thinking about it.
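The defining property is simple to state: U is unitary when U†U = I, which is exactly the condition that U preserves inner products and norms. A minimal numerical sketch, building a random unitary from a QR decomposition:

```python
import numpy as np

# A unitary transform U satisfies U^dagger U = I, so it preserves norms.
# The Q factor of a QR decomposition of a complex matrix is unitary.
rng = np.random.default_rng(0)
n = 4
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
q, _ = np.linalg.qr(a)

print(np.allclose(q.conj().T @ q, np.eye(n)))  # True: U^dagger U = I

# Norm preservation on an arbitrary vector:
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(np.isclose(np.linalg.norm(q @ v), np.linalg.norm(v)))  # True
```

Rotations, phase shifts, and the Fourier transform are all instances of the same property.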

(2^n-ons are hypercomplex numbers that are related via the 2^n-on construction. Up to and including n=3, the 2^n-on construction gives the same numbers as the Cayley-Dickson construction. Beyond that, the 2^n-ons are "nicer".)
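For reference, the Cayley-Dickson construction that the 2^n-ons agree with up to n=3 can be sketched in a few lines: each doubling treats a 2^(n+1)-dimensional number as a pair (a, b) of 2^n-dimensional ones. The sign convention below is one of several equivalent ones in the literature:

```python
# Cayley-Dickson doubling: conj(a, b) = (conj(a), -b)
# and (a, b) * (c, d) = (a*c - conj(d)*b, d*a + b*conj(c)).
# Base case: plain real numbers. Doubling once gives the complex numbers,
# twice the quaternions, three times the octonions.

def conj(x):
    if isinstance(x, (int, float)):
        return x
    a, b = x
    return (conj(a), neg(b))

def neg(x):
    if isinstance(x, (int, float)):
        return -x
    a, b = x
    return (neg(a), neg(b))

def add(x, y):
    if isinstance(x, (int, float)):
        return x + y
    return (add(x[0], y[0]), add(x[1], y[1]))

def mul(x, y):
    if isinstance(x, (int, float)):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

# Complex check: i * i = -1, with i represented as the pair (0, 1)
print(mul((0.0, 1.0), (0.0, 1.0)))  # (-1.0, 0.0)

# Quaternion check: i * j = k, with quaternions as pairs of complex pairs
i = ((0.0, 1.0), (0.0, 0.0))
j = ((0.0, 0.0), (1.0, 0.0))
k = ((0.0, 0.0), (0.0, 1.0))
print(mul(i, j) == k)  # True
```

Each doubling loses a property (ordering, commutativity, associativity), which is part of what makes the "nicer" claim about 2^n-ons beyond n=3 interesting.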

I know the following: