Quantum computing and error correction: a fundamental milestone reached

Physical quantum bits, or qubits, are vulnerable to errors. These errors arise from various sources, including quantum decoherence, crosstalk, and imperfect calibration. Fortunately, quantum error correction theory provides a way to compute while simultaneously protecting quantum data from such errors. "Two capabilities will distinguish an error-corrected quantum computer from current noisy intermediate-scale quantum (NISQ) processors," says QuTech professor Leonardo Di Carlo. "First, it will process quantum information encoded in logical qubits rather than physical qubits, each logical qubit made up of many physical qubits. Second, it will use quantum parity checks interlaced with computational steps to identify and correct errors that occur in physical qubits, safeguarding the encoded information as it is processed."
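
To build intuition for how parity checks can reveal errors without reading out the encoded data itself, here is a minimal sketch in Python of the classical analogue: a three-bit repetition code, where two pairwise parity checks locate a single bit flip so it can be undone. The function names and the error model are illustrative assumptions, not part of the Delft experiment.

```python
import random

def encode(bit):
    """Encode one logical bit redundantly across three physical bits."""
    return [bit, bit, bit]

def apply_noise(bits, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def syndrome(bits):
    """Pairwise parity checks: they compare neighbours without revealing
    the logical value. (0, 0) means no error; other patterns locate a flip."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Use the syndrome to undo a single bit flip in place."""
    flip_at = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip_at is not None:
        bits[flip_at] ^= 1
    return bits

def decode(bits):
    """Majority vote recovers the logical bit."""
    return int(sum(bits) >= 2)

# A single flip is detected and corrected; the logical bit survives.
noisy = apply_noise(encode(1), p=0.1)
print(decode(correct(noisy)))  # prints 1 unless two or more flips occurred
```

Quantum parity checks play the same role, except that ancilla qubits measure joint stabilizer operators, so the encoded superposition is never read out directly.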

According to the theory, the logical error rate can be exponentially suppressed provided that the rate of physical errors is below a threshold and that the circuits for logical operations and stabilization are fault tolerant. The basic idea is that as redundancy increases, with more and more physical qubits encoding the data, the net error rate decreases (a scaling sketched numerically below). Researchers from TU Delft, together with colleagues from TNO, have now taken an important step towards this goal by realizing a logical qubit made up of seven physical qubits.

Di Carlo stressed the multidisciplinary nature of the work: "This is a combined effort of experimental physics, theoretical physics by Barbara Terhal's group, and electronics developed with TNO and external collaborators. The project is mainly funded by IARPA and Intel Corporation."

"Our big goal is to show that as we increase coding redundancy, the net error rate decreases exponentially," Di Carlo concluded. "Our current focus is on 17 physical qubits and the next will be 49. All layers of the architecture of our quantum computer have been designed to allow for this scaling."
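
The scaling Di Carlo describes can be made concrete with the textbook surface-code error model, in which the logical error rate falls roughly as (p/p_th)^((d+1)/2) once the physical error rate p is below the threshold p_th, where d is the code distance. The threshold, prefactor, and physical error rate below are assumed illustrative values, not measurements from the Delft devices; the qubit count 2*d^2 - 1 is the standard size of a distance-d surface code.

```python
# Illustrative surface-code scaling below threshold:
#   p_logical ~ A * (p / p_th) ** ((d + 1) / 2)
# p, p_th, and A are assumed values for the sake of the example.
p, p_th, A = 1e-3, 1e-2, 0.1

for d in (3, 5, 7):
    n_qubits = 2 * d**2 - 1          # data + ancilla qubits, e.g. 17 at d = 3
    p_logical = A * (p / p_th) ** ((d + 1) / 2)
    print(f"distance {d} ({n_qubits} qubits): logical error ~ {p_logical:.0e}")
```

With these assumed numbers, each step up in code distance suppresses the logical error rate by another factor of ten, which is why growing from 17 to 49 physical qubits is the natural next rung on the ladder.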