Qubits are not Bits!

I had a revelation in the last couple of days — qubits are not bits. Shocking right? Let me explain.

In traditional computing, everything you see is a 0 or a 1, because everything is eventually compiled down to a bit representation. Groups of bits are used to represent instructions and information. When a programmer types in something like result = 1 + 2, all of that is converted into machine language — 0’s and 1’s. The set of bits that represents instructions is well-defined for each chip, like your mass-market Intel x86 processors. The values (bits) used in each operation are then manipulated and stored in memory, to be used later.
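To make that concrete, here’s a tiny Python sketch using the standard library’s dis module, with Python bytecode standing in for x86 machine code (the specific opcodes differ by chip and by Python version, but the idea is the same): result = 1 + 2 becomes a sequence of numerically encoded instructions.

```python
# Peek at the instructions that `result = 1 + 2` compiles down to.
# Python bytecode isn't x86 machine code, but it illustrates the same idea:
# the source line becomes numerically encoded instructions operating on values.
import dis

code = compile("result = 1 + 2", "<example>", "exec")
dis.dis(code)              # human-readable listing of the instructions
print(code.co_code.hex())  # the raw instruction bytes themselves
```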

In quantum computing, people talk about how qubits are 0, 1, or both at the same time, and I’ve had this misconception that qubits just run the exact same instructions with the same set of information as traditional computing — except that all the instructions run simultaneously. But that’s not the case (as this article from Chemical & Engineering News explains very well). Qubits represent the state of things, not instructions or information. When you program with quantum computing, you set the qubits into a specific, defined state and let them naturally (i.e. magically) settle into a final, resting state. You don’t program in explicit instructions on how that initial state turns into the final state; you just set up the right constraints on the system (how various qubits are tied together, for example, or which gates are used to influence the qubits). So when people talk about programming quantum computers, the program instructions exist outside of the “quantum” part. As this paper from Oak Ridge National Laboratory explains, a quantum architecture involves both traditional computing and what they call “quantum accelerators”.
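As a way to convince myself of the “state plus constraints, no step-by-step recipe” idea, here’s a toy sketch in plain Python/NumPy (not any real quantum SDK; the gate and state are just small matrices and vectors I made up for illustration). The “program” is only which gate you choose to apply; the answer is whatever state you read out at the end.

```python
# Toy sketch: a single qubit as a 2-element state vector (amplitudes for |0> and |1>).
# The "program" is just which gate matrix you apply; you never spell out how the
# state should change step by step -- the gate you chose does that.
import numpy as np

ket0 = np.array([1.0, 0.0])                          # start in a defined state: |0>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # a gate that creates superposition

final_state = hadamard @ ket0                # "run" the program
probabilities = np.abs(final_state) ** 2     # chance of reading out 0 or 1
print(probabilities)                         # -> [0.5 0.5]
```

Of course, a real device never hands you the amplitudes directly; you only ever see measurement outcomes, which is part of what I puzzle over further down.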

A highly simplified comparison: when you fill up your car with gas, you fill it until the nozzle disengages, because it senses that the tank is full (quantum programming — the state speaks for itself). You don’t think to yourself, “I need to fill it up with exactly 5.182 gallons of gas to get it full” (traditional programming — you specify everything).

Improved Understanding of Qubits (Maybe)

One of the comments on my previous post about the physical representation of qubits prompted me to dig a little deeper into existing resources, to see if I could better understand how quantum computers settle into low-energy states naturally, and how that affects the programming aspect. While doing that research, I think I’ve clarified a couple of things for myself.

First, I found this great D-Wave whitepaper on how you would approach programming a quantum computer to solve a map coloring problem, compared to traditional computing. As they say, instead of explicitly programming the steps of the algorithm, you instead program what I think of as the “error function” — the thing you want minimized. I find quite a lot of similarities here to machine learning, and I can see why people say that quantum computing could really help solve machine learning problems. In many machine learning algorithms, you likewise try to minimize some error function (e.g. root-mean-square error on a test data set). Even Figure 3, of a quantum cell (page 6), looks kind of like a neural network, especially if you squint. But one thing that I didn’t understand very well was the use of qubits in the cells. People talk about ~50 qubits as being able to perform things traditional computing cannot. Yet according to this whitepaper, if you extend the problem to all 13 Canadian provinces and territories, solving this relatively straightforward problem would require 104 qubits (13 provinces and territories * 1 cell each * 8 qubits per cell), which seems excessive… so it’s not quite clear to me whether that’s because this is a simplified example or because it depends on the D-Wave architecture.
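To see the “program the error function” idea in the smallest possible terms, here’s a toy Python sketch I put together: a made-up three-region map, a penalty function that counts neighboring regions sharing a color, and brute force standing in for the quantum annealer. This is nothing like the actual D-Wave qubit-cell encoding in the whitepaper, just the shape of the idea.

```python
# Toy version of "program the error function, not the steps": color a tiny
# made-up 3-region map so that no neighbors share a color, by minimizing a
# penalty. Brute force stands in for the quantum annealer here; the "program"
# is really just the penalty function.
from itertools import product

regions = ["A", "B", "C"]
neighbors = [("A", "B"), ("B", "C")]   # A-B and B-C share a border
colors = ["red", "green"]

def penalty(assignment):
    """Count how many neighboring regions ended up with the same color."""
    return sum(assignment[x] == assignment[y] for x, y in neighbors)

best = min(
    (dict(zip(regions, combo)) for combo in product(colors, repeat=len(regions))),
    key=penalty,
)
print(best, "penalty =", penalty(best))   # a conflict-free coloring has penalty 0
```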

Second, I came across this great 2013 article from Scientific American that addresses some of the questions I brought up in my previous post about quantum parity. Apparently, for at least two types of quantum systems (superconducting circuits and trapped ions), simple calculations seem to be solved similarly, though there are differences in speed and error propagation. With Microsoft introducing topological qubits, it will be interesting to see if they perform similarly to the other two technologies. One new term that I learned from the article is “quantum volume”, which encapsulates the idea that the published number of qubits does not reflect the actual processing power (what I referred to as quantum parity)! So increasing quantum volume, rather than just the “raw” number of qubits, is what will actually help solve more complex problems.

So overall I think +1.5 for understanding this week.

Qubit Parity

One of the things I see mentioned in quantum computing articles is that qubits are prone to interference from outside sources — which is why they have to be kept cool, and why companies like Google want to understand how errors scale with the number of qubits. Recently, Intel announced a 17-qubit quantum chip it had delivered, with this interesting comment:

IBM’s latest chip also has 17 qubits. Having that number means QuTech will be able to run certain error correction protocols on the chip, Held said.

Since Microsoft is exploring topological qubits that they claim are more stable (and every company seems to be taking a different approach), it makes me wonder whether there is “qubit parity”. And I think of parity in two senses:

  • Can we compare qubits from Intel against qubits from IBM, in terms of processing power or potential? If Intel has a 17-qubit chip and IBM has a 17-qubit chip, would they be capable of performing the same calculations? Or, because they use different underlying quantum technologies, would one be more or less “powerful” than the other? Or perhaps, like GPUs and even more specialized chips, some types of quantum chips might be better suited to certain tasks than others?
  • Do 17 qubits really behave like 17 qubits, given that some are used for error correction, or would they behave more like 14 error-free qubits? Basically, I wonder how much computational power you lose to the error correction aspect, regardless of technical approach. Is it roughly constant across technical approaches, say 10%?

It will be interesting to see how these factors play out in the market, as more chips become fabricated and used in various applications.

The Physical Representation of Qubits

As I’ve been reading more about quantum computing, I’ve been wondering what the physical manifestation of a quantum computer would be. I know what modern-day silicon chips look like, but I have a hard time imagining what a quantum computer would look like. So let’s start at the smallest component — a qubit.

As Wikipedia so helpfully describes, there are many physical representations of qubits. My simple take-away is that all of them involve very small things — i.e. a single atom, an electron, or a photon. But these very small things require 1) supercooling (to reduce the noise in each qubit’s state) and 2) lasers (to make measurements), so while each individual element is small, the entire apparatus is large and very complex. Kind of like vacuum tubes and mainframes, in my imagination. As the technology advances, I wonder if it’s even possible for quantum technology to shrink the way modern-day transistors have, since the atomic level is just so different, or if quantum computing will only ever be available through the cloud.

I’ve also noticed that different teams are experimenting with different types of qubits and quantum computing: Google and UCSB with superconducting qubits, Microsoft with topological qubits, etc. So it’s not yet obvious that there is a dominant technology or approach in the field.

After skimming the paper from Google and UCSB, I’m still unclear on how qubits in general translate into computing work. It seems like after you measure the state multiple times, you get out a probability distribution over the states the qubit can be found in (and you can extend this to multiple qubits). So while a qubit can be in a superposition of all of its states at the same time (in real life), as soon as you sample it, you’ve digitized it, or effectively equated the qubit to a normal bit. And therefore, similar to how sampling analog music to create a digital representation loses data, I would assume that sampling qubits to figure out their state must also lose some (valuable?) data… given all the hype, I’m probably missing some key understanding of the field, so I definitely plan to read more papers and articles. And maybe that is why error correction is so critical in quantum computing?
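Here’s a toy Python/NumPy sketch of what I mean by “sampling digitizes the qubit” (made-up amplitudes, nothing from the actual paper): repeated measurements only let you estimate the probabilities, and the relative phase I bake into the state never shows up in the counts at all.

```python
# Toy sketch of "sampling digitizes the qubit": prepare a made-up amplitude pair,
# "measure" it many times, and all you get back are 0/1 counts -- an estimate of
# the probabilities, with the relative phase information gone.
import numpy as np

rng = np.random.default_rng(0)
amplitudes = np.array([np.sqrt(0.3),
                       np.sqrt(0.7) * np.exp(1j * np.pi / 3)])  # includes a phase
probs = np.abs(amplitudes) ** 2                  # probabilities: [0.3, 0.7]

shots = rng.choice([0, 1], size=1000, p=probs)   # each "measurement" is just a bit
print("estimated P(1):", shots.mean())           # close to 0.7; the phase never appears
```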