You can't beat the Heisenberg limit, but, with enough math, you can come close.

Quantum computing is all about controlling quantum states. Lately, news has been coming out about quantum computers computing stuff, with the underlying ability to control things taken for granted. But the truth is that control is still a limiting factor in the development of quantum computers.

At the heart of the matter is the qubit, a quantum object that is used to encode information. Part of the power of a quantum computer is that a qubit can be put into a superposition state—more on that below—that allows a kind of parallelism. The aim of a quantum algorithm is to manipulate the qubit's superposition state so that when we measure the qubit, it returns a bit value that corresponds to the right answer.

And that means controlling the superposition state, which involves quite a bit of high-precision (and high-price) equipment. Improvements usually involve even more expensive equipment. But new research suggests that we might be able to improve our control by a factor of 1,000 using existing equipment and clever thinking.

The author both should and should not have written a long aside about superposition

To understand the control problem, we need to have a bit of understanding about superposition. When we describe a quantum superposition state, we often use a shortcut and say something like "this means the particle is in two positions at once."

But that really doesn't cut it for our purposes, and I think it is misleading anyway. A quantum object has a number of properties that we can measure. Until a property, like position, is measured, it has no value. Instead, we have to think about probabilities: if we were to perform a measurement, what is the probability that we would obtain a certain value?

That's the surface. Underneath the surface is a highly unusual concept called a "probability amplitude." A probability is always positive (or zero) and real, but an amplitude can be positive, negative, or even complex (if you don't know what a complex number is, don't worry). This changes everything.

Let's imagine that we have a single particle, and we fire it at a screen with two holes. The particle may pass through either hole or hit the screen. On the other side of the screen, we place a detector and ask ourselves, "what is the probability that we will detect a particle?"

Well, to obtain that, we have to add up the probability amplitudes of each path that the particle can take to the detector. And since amplitudes can be positive or negative, adding them does not always give a bigger number; the sum can even be zero.

If we perform this calculation for many different possible detector locations, we find many places where the probability is absolutely zero and many places that are equally likely. If you perform this experiment, this is exactly what you measure. After a thousand individual particles pass through the holes, there are some places where they are never detected and others where they are detected regularly.
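The amplitude-summing rule above is easy to see in a toy calculation. The sketch below uses made-up geometry (wavelength, slit separation, and screen distance are all invented for illustration): each path contributes a unit complex amplitude whose phase depends on the path length, and the detection probability is the squared magnitude of the sum.

```python
import numpy as np

# Toy two-slit model: the amplitude to reach a detector is one complex
# phase factor per path; probability is the squared magnitude of the sum.
# Wavelength and geometry are invented for illustration.
wavelength = 1.0
k = 2 * np.pi / wavelength          # wavenumber
slit_separation = 5.0
screen_distance = 100.0

def probability(x):
    """Detection probability at position x on the far screen."""
    # Path lengths from each slit to detector position x.
    r1 = np.hypot(screen_distance, x - slit_separation / 2)
    r2 = np.hypot(screen_distance, x + slit_separation / 2)
    # One amplitude per path: a unit complex phase exp(i*k*r).
    amp = np.exp(1j * k * r1) + np.exp(1j * k * r2)
    return abs(amp) ** 2 / 4        # normalized so a bright peak is 1

print(probability(0.0))  # equal path lengths: amplitudes add (bright spot)
```

Scanning `x` across the screen reproduces the pattern described in the text: positions where the probability is exactly zero (the two amplitudes cancel) interleaved with positions where detection is likely.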

Where am I going with all of this? In quantum mechanics, to accurately predict these results, you need to know all the possible paths by which a particle may reach a certain position. So, in our example above, we need to take into account both paths to the detector. This leads people to say that the particle passes through both holes at once.

But the addition of the probability amplitudes determines where a particle may be detected and where it will never be detected. So if you modify one of the paths the particle may take, you modify the amplitudes and thus shift the locations where the particle may be found.

Using superposition

So, the probability of measuring a value depends on the history of the probability wave. This encompasses all possible paths. And that can be turned into an excellent sensor. Indeed, we use this property to measure the passage of time with exquisite sensitivity. It also works well for measuring other properties.

A common example is sensing magnetic fields. Something like an electron is also a tiny magnet. The electron's magnet will either align with the magnetic field or anti-align. So, we can put the electron in a superposition state of aligned and anti-aligned. The effect of the magnetic field is to modify the probability amplitudes of the two states, and the size of the change depends on the strength of the magnetic field.

After passage through the magnetic field, we measure the orientation of the electron's magnet. An individual measurement tells us nothing, but after a thousand electrons, we have the relative probabilities of the two orientations. From that, we can calculate the magnetic field strength.
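The statistical logic of that measurement can be sketched in a few lines. The numbers here are hypothetical (the coupling strength, interaction time, and field value are invented), but the procedure mirrors the text: each electron reports "aligned" with a field-dependent probability, and after a thousand shots we invert the relation to estimate the field.

```python
import numpy as np

# Hypothetical numbers for illustration; only the statistics mirror the text.
rng = np.random.default_rng(42)
gamma_t = 1.0          # coupling * interaction time (radians per field unit)
B_true = 0.7           # the field value we pretend nature has chosen

# Each electron reads out "aligned" with probability p(B).
p_true = (1 + np.cos(gamma_t * B_true)) / 2
n = 1000
aligned = rng.binomial(n, p_true)   # count of "aligned" outcomes

# Invert p = (1 + cos(gamma_t * B)) / 2 to estimate the field.
p_hat = aligned / n
B_est = np.arccos(2 * p_hat - 1) / gamma_t
print(B_true, B_est)   # estimate lands close to the true value
```

A single shot tells you almost nothing (it is one biased coin flip); the field only emerges from the relative frequencies, exactly as the article says.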

This can, in principle, be a highly accurate sensor. Only one thing gets in the way: noise. The value of the probability amplitudes depends on the path that they take (though not necessarily the distance they travel). That path is changed by the local environment in unpredictable ways, so each electron is actually a measurement of the influence of the magnetic field we want to measure plus a random contribution from noise. The latter is different for each electron. If the noise is large enough, it all evens out, such that the two measurement results (aligned and anti-aligned) have the same probability.

The noise cannot be reduced. So, to get a good measurement, we have to make our electron less sensitive to random fluctuations and more sensitive to the signal we are interested in.

Getting sensitive

In the case of measuring time-dependent signals, the way to do this is to repeatedly thump the electron very hard. In the absence of any thumping, or any noise, the electron's probability wave changes smoothly with time. Noise adds little jumps to these changes. It looks a bit like the wave jumped forward (or backward) in time without you noticing.

But we don't want little jumps, because those get in the way of the signal. Instead, we want to hit the electron with a quantum baseball bat, which creates a jump big enough to swap the probability amplitudes of the two possible outcomes (this is called a "pi-pulse"). When you do this at regular intervals, the effect is to undo all the noise-driven changes that accumulate during the interval.

So, if there is no signal and only noise, you measure no net change in probabilities. But if the magnetic field is oscillating at a constant frequency (or more precisely, driving the qubit at that frequency), the changes in probability amplitude will accumulate.

This only works if the signal varies in step with the interval between the thumps we're giving the system. Essentially, we have a very narrow filter (those of you who play with electronics may recognize the description of a lock-in amplifier hidden in here).
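A minimal simulation shows why the thumping acts as a filter. Assuming a simple model (invented numbers throughout), the qubit accumulates phase from the field, and each pi-pulse flips the sign of all further accumulation. A static offset, standing in for slow noise, then cancels block by block, while a field oscillating in step with the pulses always adds up.

```python
import numpy as np

# Toy dynamical-decoupling filter. Illustrative numbers only.
dt = 1e-9                   # 1 ns time step
n_steps = 4000              # 4 us of evolution
steps_per_tau = 100         # pi-pulse every 100 ns

idx = np.arange(n_steps)
t = idx * dt
# Each pi-pulse flips the sign of subsequent phase accumulation.
sign = (-1.0) ** (idx // steps_per_tau)

def accumulated_phase(field):
    """Net phase (radians) picked up by the qubit, with sign flips."""
    return float(np.sum(sign * field * dt))

tau = steps_per_tau * dt
static_noise = np.full(n_steps, 1e6)        # constant 1e6 rad/s offset
matched = 1e6 * np.sin(np.pi * t / tau)     # half-period equals tau

print(accumulated_phase(static_noise))  # 0.0: the echoes cancel the offset
print(accumulated_phase(matched))       # large: the in-step signal survives
```

Only signals whose half-period matches the pulse spacing survive the sign flips, which is exactly the narrow lock-in-style filter the text describes.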

Although the filter is narrow enough to be useful, it can't be shifted smoothly in frequency, so we can't scan across frequencies. The big problem is technology. Our quantum baseball bat is often a microwave pulse. Those pulses have to be generated by something, and a good signal generator might update its output every nanosecond. That means that you can only change the interval between pulses (and the length of each pulse) by increments of one nanosecond.

Imagine that you want to measure the frequency and amplitude of a varying magnetic field. You know that the magnetic field varies at a frequency of about 5 MHz (meaning that in 100 ns, the field goes from fully positive to fully negative). But you don't know the frequency exactly. To find the magnetic field, you step your pulse interval to cover the entire time range of interest. You find... nothing. Why? Because the magnetic field was varying at a frequency that fell between the smallest steps you could take.
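The arithmetic makes the gap concrete. With the pulse interval matched to the field's half-period, a 1 ns timing grid turns into a coarse frequency grid (the 5.02 MHz example below is an invented stand-in for an unlucky target):

```python
# With 1 ns resolution, the pulse interval can be 99, 100, or 101 ns,
# but nothing in between. Matching the interval to the field's
# half-period turns that into a coarse grid of measurable frequencies.
for interval_ns in (99, 100, 101):
    freq_mhz = 1e3 / (2 * interval_ns)   # half-period = pulse interval
    print(f"{interval_ns} ns -> {freq_mhz:.3f} MHz")

# A field at, say, 5.02 MHz has a half-period of about 99.6 ns, which
# falls between the 99 ns and 100 ns settings: a stepped scan misses it.
half_period_ns = 1e3 / (2 * 5.02)
print(half_period_ns)
```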

This same problem applies to the control of qubits. In a device with multiple qubits, each is a bit different and has to be controlled with a slightly different set of microwave pulses. The resolution of our instruments does not allow for this to be optimized very well.

The way to get around this, it turns out, is to treat the electron a little nicer. Instead of repeatedly applying a baseball bat, we apply a smooth push to the electron. This smooth microwave pulse has the interesting effect of increasing the temporal resolution of the pulses. And, as a result, we get higher frequency resolution (and better qubit control).

Rounding the corners of the square

In an on-off pulse, the amplitude of the pulse generator has only one of two values. In a pulse that smoothly increases and decreases, you can use the full scale of the amplitude range of the generator to change the center location of each pulse by an amount that is much smaller than a single nanosecond. Essentially, nature figures out the center of the pulse by interpolation, even if the pulse generator never actually puts out the center value.

The result is that a pulse generator with a 14-bit digital-to-analog converter and a temporal resolution of 1 nanosecond can change the timing between the centers of pulses by just a picosecond or so. That's an improvement of a factor of a thousand.
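One way to see how a shaped pulse buys sub-sample timing is to define the pulse's center as the time at which its running integral reaches half the total area, as the article discusses further below. In this sketch (pulse shape and numbers invented for illustration), tiny amplitude tweaks, far smaller than the DAC's full scale, move that half-area point by a small fraction of the 1 ns sample spacing:

```python
import numpy as np

# Half-area "center" of a sampled pulse. The generator only outputs
# amplitudes on a 1 ns grid, but reweighting the samples shifts the
# half-area crossing by much less than one sample. Invented numbers.
def half_area_time(samples, dt=1.0):
    """Time (ns) at which the running integral reaches half the total
    area, interpolated within the crossing sample."""
    cum = np.cumsum(samples) * dt
    half = cum[-1] / 2.0
    i = int(np.searchsorted(cum, half))   # first sample past half area
    prev = cum[i - 1] if i > 0 else 0.0
    frac = (half - prev) / (samples[i] * dt)   # position within sample i
    return (i + frac) * dt

t = np.arange(20.0)                           # 20 samples, 1 ns apart
pulse = np.exp(-0.5 * ((t - 10) / 3) ** 2)    # smooth (Gaussian) pulse

c0 = half_area_time(pulse)
# Tilt the amplitudes by 0.1% per sample: the center moves by only a
# few picoseconds, despite the 1 ns sample grid.
c1 = half_area_time(pulse * (1 + 1e-3 * (t - 10)))
print(c1 - c0)
```

A square on-off pulse has no such knob: its samples are all at one value, so its half-area point can only jump in whole-sample steps.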

The researchers showed that it worked by performing spectroscopy on magnetic fields applied to superconducting loops of current. They then applied the same technique to measure the nuclear magnetic resonance frequency of a single carbon atom (the heavier isotope: 13C) in a diamond. In both cases, they were able to measure at a much higher resolution than they should have been able to, given their equipment.

Isn’t nature weird?

The achievement here is pretty awesome. Basically, the researchers have taken a bit of equipment that most labs already have and used it in a slightly different way. The result is something that you should only be able to get with pulse generators of the future.

But, even though I get the results and I understand the argument, I still don't really understand how this works. Nature doesn't interpolate like we do—at least, I don't think it does. The electron, or whatever quantum object you choose, sees the pulse as it actually is: a set of discrete voltages that increase and decrease in fixed steps at fixed time intervals. The center of the pulse is not magically discerned by tracing an imaginary line between fixed points.

I guess what really matters is something called the "pulse area" (the integral of the pulse, so literally the area under the curve). The center of the pulse can then be defined as the time at which the integral reaches half the total. For a pulse in which the amplitude varies smoothly, small changes to the shape of the pulse can vary where this halfway point is reached in a controlled manner.

But I'm not convinced that this is the case either. The key is area, and, for a square pulse, the area can still be varied continuously, even if the time-steps are quite coarse. You simply need to change the amplitude of the on-value of the square pulse.

This technique, though, is going to be a boon for many. Those working in quantum computing like to be able to control their superposition states, and that requires using exactly this technique. And now, they should be able to control their quantum states with even higher precision, which means that the stored quantum information will last longer, and more computations can be performed. In that respect, this represents a solid step forward.

And one day, I might understand why it works better than I think it should.

Physical Review Letters, 2017, DOI: 10.1103/PhysRevLett.119.260501
