Qubit Parity

One of the things I see mentioned in quantum computing articles is that qubits are prone to interference from outside sources — which is why they have to be kept cold, and why companies like Google want to understand how errors scale with the number of qubits. Intel recently announced that it had delivered a 17-qubit quantum chip, with this interesting comment:

IBM’s latest chip also has 17 qubits. Having that number means QuTech will be able to run certain error correction protocols on the chip, Held said.

Since Microsoft is exploring topological qubits that it claims are more stable (and every company seems to be taking a different approach), it makes me wonder whether there is such a thing as “qubit parity.” I think of parity in two senses:

  • Can we compare qubits from Intel against qubits from IBM, in terms of processing power or potential? If Intel has a 17-qubit chip and IBM has a 17-qubit chip, would they be capable of performing the same calculations? Or, because they use different physical implementations, would one be more or less “powerful” than the other? Or perhaps, like GPUs and even more specialized chips, some types of quantum chips might be better suited to certain tasks than others?
  • Do 17 qubits really behave like 17 qubits, given that some are used for error correction, or would they behave more like 14 error-free qubits? Basically, I wonder how much computational power you lose to error correction, regardless of technical approach. Is the overhead constant across approaches, say 10%?
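On the second question, a back-of-envelope sketch may help set expectations. One widely discussed scheme, the distance-3 surface code, happens to use exactly 17 physical qubits (9 data plus 8 ancilla) to encode a single logical qubit — so under that scheme a 17-qubit chip would yield one error-corrected qubit, not 14, and the overhead is far more than 10%. The snippet below is my own illustration assuming the rotated surface code's 2d² − 1 layout, not a description of any vendor's actual chip:

```python
# Rough sketch (assumption: rotated surface code, where one logical
# qubit at code distance d needs d*d data qubits plus d*d - 1 ancilla
# qubits for syndrome measurement). Not based on any specific chip.
def physical_qubits_per_logical(d: int) -> int:
    """Physical qubits to encode one logical qubit at distance d."""
    data = d * d            # qubits that carry the encoded state
    ancilla = d * d - 1     # qubits used only for error detection
    return data + ancilla

for d in (3, 5, 7):
    print(f"distance {d}: {physical_qubits_per_logical(d)} physical qubits")
# distance 3: 17 physical qubits
# distance 5: 49 physical qubits
# distance 7: 97 physical qubits
```

Higher code distance suppresses errors more strongly but costs quadratically more qubits, which is one reason the overhead question is so central at this scale.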

It will be interesting to see how these factors play out in the market as more chips are fabricated and put to use in real applications.