Atom Computing wins CTA's Emerging Tech Company of the Year Award

Novel Solutions for Continuously Loading Large Atomic Arrays

Researchers at Atom Computing have invented a way to keep the atomic array at the heart of the company’s quantum computing technology continuously populated with qubits.

In a preprint article on arXiv, the Atom team describes its approach both to assembling a 1,200-plus qubit array and to overcoming atom loss, a major technical challenge for quantum computers that use neutral atoms as qubits.

Dr. Ben Bloom, Founder and Chief Technology Officer, said the advancements ensure that as Atom Computing scales to larger numbers of qubits, its technologies can efficiently perform mid-circuit measurement, which is necessary for quantum error correction, and other operations.

“All quantum computing technologies need to demonstrate the ability to scale with high-fidelity qubits,” he said. “Neutral atom systems are no exception.”

Technical challenges

Atom loss occurs for numerous reasons. Qubits can be knocked out of place by stray atoms in the vacuum chamber or occasionally disrupted during “readout,” when they are imaged to check their quantum state.

All quantum computers that use individual atoms as qubits (such as trapped-ion or neutral atom systems) experience atom loss, but the problem is particularly acute for neutral atom quantum computing technologies.

With neutral atom technologies, developers often assemble arrays (two-dimensional grids) with extra qubits to act as a buffer. The system still experiences loss but has enough qubits left to run calculations.

Another approach involves slowing down the readout to reduce the number of qubits lost during the process, but at the cost of slower, less efficient operations.

It is also difficult to replace individual qubits within an array. The conventional practice is to throw out the entire array and replace it with a new one, which becomes unwieldy and time-consuming as systems scale.

“Error correction will require quantum information to survive long past the lifetime of a single atom, and we need to find effective strategies to reach and stay at very large qubit numbers,” Bloom said.

Atom Computing’s approach

To overcome these challenges, Atom researchers have engineered novel solutions into the company’s next-generation quantum computing systems, which will be commercially available within the next year. 

As outlined in the arXiv paper, Atom Computing has developed a method to continuously load ytterbium atoms into the computation zone and store them until needed. Optical tweezers (concentrated beams of light) then move individual atoms into the array to replace missing qubits.
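In outline, this is a detect-and-refill loop: image the array, find vacant sites, and move reservoir atoms into them. The minimal Python sketch below models that loop with made-up site counts, loss rates, and function names; it is an illustration of the idea, not Atom Computing's actual control software.

```python
# Illustrative sketch only: a toy model of the detect-and-refill loop described
# above. Site counts, loss probabilities, and function names are hypothetical.
import random

ARRAY_SITES = 1225          # target computation-zone sites (from the paper)
LOSS_PROBABILITY = 0.01     # assumed per-cycle chance a trapped atom is lost

def image_array(occupied: set) -> set:
    """Return the set of sites found empty during imaging."""
    return {site for site in range(ARRAY_SITES) if site not in occupied}

def refill(occupied: set, reservoir: list) -> set:
    """Move reservoir atoms into empty sites, one tweezer move per vacancy."""
    for site in image_array(occupied):
        if reservoir:
            reservoir.pop()        # take one atom from the reservoir...
            occupied.add(site)     # ...and place it in the vacant site
    return occupied

occupied = set(range(ARRAY_SITES))
reservoir = list(range(200))       # continuously reloaded buffer of atoms
for cycle in range(10):
    # random atom loss between cycles
    occupied = {s for s in occupied if random.random() > LOSS_PROBABILITY}
    occupied = refill(occupied, reservoir)
    print(f"cycle {cycle}: {len(occupied)}/{ARRAY_SITES} sites filled")
```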

In addition, our researchers have designed and installed an optical cavity in our quantum computers that creates large numbers of deep energy wells to hold qubits tightly in place while they are imaged. These deeper wells help protect qubits, reducing the number lost during readout.

These innovations have enabled the Atom Computing team to demonstrate that we can keep a large atomic array of more than a thousand qubits consistently and fully loaded.

“We’ve known from the beginning that we needed to overcome these challenges if our technology is to successfully scale, operate efficiently, and achieve our goal of fault-tolerant quantum computing,” Bloom said. “It’s exciting to be able to showcase the solutions we have been developing for the past several years.”

Leveraging Atom's technology to improve constrained optimization algorithms

Jonathan King, Co-Founder and Chief Scientist

Researchers at Atom Computing have formulated a new method for stabilizing optimization algorithms that could lead to more reliable results from early-stage quantum computers.

In a preprint article posted on arXiv, the authors describe how the method, known as subspace correction, leverages key features of Atom Computing’s atomic array quantum computing technologies, notably long qubit coherence times and mid-circuit measurement. The technique detects whether an algorithm is violating constraints – essentially going off track during computation – and then self-corrects.

Subspace correction could reduce the number of circuits developers need to run on a quantum computer to get the correct answer for optimization problems.

We sat down with Dr. Kelly Ann Pawlak and Dr. Jeffrey Epstein, Senior Quantum Applications Engineers at Atom Computing and the lead authors of the paper, to learn more.

Tell us more about this technique. What do you mean by “subspace correction?” How does it work?

Kelly: First let’s talk about quantum error correction, one of the most important topics in quantum computing. In quantum error correction, we use mid-circuit measurements to detect the state of the quantum system and identify whether errors, such as bit flips, have occurred during operation. If we detect an error, we can apply a correction using feed-forward capabilities and resume the calculation.
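As a concrete (and deliberately classical) illustration of that detect-and-correct loop, here is a toy version of the textbook three-qubit bit-flip repetition code in Python. Real QEC performs syndrome extraction with ancilla qubits on quantum states; this sketch only shows how parity checks locate a single flipped bit.

```python
# A minimal classical sketch of the three-qubit bit-flip repetition code.
# Real QEC operates on quantum states; this toy version uses classical bits
# purely to show how syndrome measurement pins down which qubit flipped.

def measure_syndrome(bits):
    """Parity checks on neighboring pairs: the 'mid-circuit measurement'."""
    s1 = bits[0] ^ bits[1]   # parity of qubits 0 and 1
    s2 = bits[1] ^ bits[2]   # parity of qubits 1 and 2
    return s1, s2

def correct(bits):
    """Feed-forward step: map the syndrome to the qubit to flip back."""
    syndrome_to_qubit = {(1, 0): 0, (1, 1): 1, (0, 1): 2}
    flip = syndrome_to_qubit.get(measure_syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1
    return bits

state = [0, 0, 0]      # logical |0> encoded as 000
state[1] ^= 1          # a bit-flip error strikes qubit 1
print(correct(state))  # -> [0, 0, 0]: the error is detected and undone
```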

Subspace correction works the same way, except it isn’t detecting and correcting “operational errors” from electrical noise or thermal effects. Instead, it probes information relevant to solving a computational problem. In lieu of looking for bit-flip errors due to stray radiation, it can check whether a solution to an industrial optimization problem obeys all the feasibility constraints. It can do so in the middle of a calculation without destroying the “quantumness” of the data.

In short, subspace correction tries to leverage the methods behind quantum error correction. It is a way of encoding redundancy checks into a quantum computation in order to solve high-value problems, not just basic quantum control problems.

Can you give us background on this technique? What inspired it?

Kelly: Atom Computing is pursuing quantum computer design practices that will enable us to achieve fault-tolerant operation as soon as possible. So for us, the engineering pipeline for implementing quantum error correction, rather than general NISQ operation, is our primary focus.

We challenged ourselves to answer the question: “Given our strengths and thoughtful engineering of a platform fully committed to running QEC functionality as soon as possible, are there any problems we can try to speed up on the way to early fault tolerance?” The answer is “Yes!”

It turns out that techniques from error correction are effective algorithmic tools for many problems. As mentioned, we have looked specifically at problems that have constraints, like the constrained optimization and constraint-satisfaction problems of classical computing. You can find examples of these problems everywhere in finance, manufacturing, logistics, and science.

Jeffrey: One of the things that’s interesting about quantum error correction is that it involves interplay between classical and quantum information. By extracting partial information about the state of the quantum computer and using classical processing to determine appropriate corrections to make, you can protect the coherent, quantum information stored in the remaining degrees of freedom of the processor. The starting point for these kinds of protocols is the choice of a particular subspace of the possible states of the quantum computer, which you take to be the “meaningful” states, the ones in which information is encoded. The whole machinery of quantum error correction is then about returning to this subspace when physical errors take you outside of it.

Many optimization problems naturally have this structure of “good” states because the problems are defined in terms of two pieces: a constraint and a cost function. The “good” states satisfy the constraint, and the answer should be chosen to minimize the cost function within that set of states. So we would like a method to ensure that our optimization algorithm does in fact return answers that correspond to constraint-satisfying states. As in the case of quantum error correction, we’d like to maintain coherence within this subspace to take advantage of any speedup that may be available for optimization problems. A possible difference is that in some cases the method may still find good solutions even if the correction strategy returns the computer to a different point in the good subspace than where it was when the error occurred, which in the case of error correction would correspond to a logical error.
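A small classical sketch may help make this “constraint plus cost function” structure concrete. The example below uses maximum independent set on a hypothetical toy graph: the feasible subspace is the set of assignments satisfying the edge constraints, and the cost function is minimized only within it. (Python; an illustration only, not the quantum protocol in the paper.)

```python
# Sketch of the 'constraint + cost function' structure, using maximum
# independent set on a toy graph. The graph and all names are illustrative.
import itertools

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a small example graph
n = 4

def feasible(assignment):
    """Constraint: no two chosen vertices share an edge (independent set)."""
    return all(not (assignment[u] and assignment[v]) for u, v in edges)

def cost(assignment):
    """Cost: negative set size, so minimizing cost maximizes the set."""
    return -sum(assignment)

# 'Good' states = the feasible subspace; optimize only within it.
feasible_states = [a for a in itertools.product([0, 1], repeat=n) if feasible(a)]
best = min(feasible_states, key=cost)
print(best, -cost(best))   # -> (0, 1, 0, 1) 2: vertices 1 and 3 selected
```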

Why is this technique important? How will it help quantum algorithm developers?

Kelly: Right now, it is really difficult to ensure an algorithm obeys the constraints of a problem when run on a gate-based quantum computer. Typically, you run a calculation several times, toss out bad results, and identify a solution. With subspace correction, you can check that the calculation is obeying constraints and, if it is not, correct it. We think this approach will reduce the number of circuit executions needed on early-stage quantum computers, saving a lot of computational time and overhead.

Were you able to simulate or test the technique on optimization problems? If so, what were the results?

Jeffrey: One of the things we show in the paper is that the state preparation protocol we describe for the independent set problem has the same statistics as a classical sampling algorithm. This characteristic makes it possible to simulate its performance on relatively large graphs, although it also directly implies that the method cannot provide a sampling speedup over classical methods. Our hope is that by combining this method with optimization techniques such as Grover search, there might be speedups for some classes of graphs. We’re planning to investigate this possibility in more detail, both theoretically and using larger scale simulations in collaboration with Lawrence Berkeley National Laboratory.
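For intuition about what “the same statistics as a classical sampling algorithm” can mean, a standard classical sampler for independent sets inserts vertices in random order and keeps each one only if it violates no constraint. The sketch below is offered purely as an analogy; it is not the protocol from the paper.

```python
# Purely illustrative: a simple classical routine that samples random
# independent sets by greedy insertion in random vertex order.
import random

def sample_independent_set(n, edges):
    adjacency = {v: set() for v in range(n)}
    for u, v in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)
    chosen = set()
    for v in random.sample(range(n), n):          # random vertex order
        if adjacency[v].isdisjoint(chosen):       # constraint check
            chosen.add(v)                         # keep only feasible additions
    return chosen

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(sample_independent_set(4, edges))           # e.g. {1, 3}
```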

Can subspace correction be applied to other problems?

Kelly: Certainly, yes. Constraints are just one kind of “subspace” we can correct. We have a lot of ideas about how to apply this method to improve quantum simulation algorithms. When running a quantum simulation algorithm, you can detect when the simulation goes off course – for example, by violating energy conservation – and try to fix it. We would also like to explore using this method to prepare classical data in a quantum computer.
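As a loose classical analogy for that kind of check (not the quantum procedure itself), the sketch below integrates a toy oscillator with a sloppy method, watches the conserved energy, and pushes the state back onto the energy shell whenever the drift exceeds a tolerance. All numbers and the correction rule are invented for illustration.

```python
# Loose classical analogy: watch a conserved quantity during a simulation,
# and when it drifts, push the state back onto the 'energy shell'.
import math

x, v, dt = 1.0, 0.0, 0.05           # toy harmonic oscillator
energy0 = 0.5 * (x**2 + v**2)       # the invariant the simulation should keep

for step in range(200):
    x, v = x + v * dt, v - x * dt   # naive Euler integrator: gains energy
    energy = 0.5 * (x**2 + v**2)
    if abs(energy - energy0) / energy0 > 0.01:   # 'syndrome': drift detected
        scale = math.sqrt(energy0 / energy)      # correction: rescale state
        x, v = x * scale, v * scale              # back onto the energy shell

# Final energy agrees with the initial energy to within the 1% tolerance.
print(round(0.5 * (x**2 + v**2), 4), round(energy0, 4))
```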

Can this method be used with other types of quantum computers?

Kelly: It could! But it would be nearly impossible for some quantum computing architectures to run parts of this algorithm prior to full fault tolerance due to the long coherence requirements. Even in early fault tolerance, where some QC architectures are racing against the clock to do quantum error correction, it would be very difficult.

Jeffrey: Everything in the method we studied lives within the framework of universal gate-based quantum computing, so our analysis doesn’t actually depend specifically on the hardware Atom Computing or any other quantum computing company is developing – it could be implemented on any platform that supports mid-circuit measurement and feedback. But the performance will depend a lot on the speed of classical computation relative to circuit times and coherence times, and our neutral atom device with long-lived qubits gives us a clear advantage.

Figure: A demonstration of how the distribution-generating primitive works on a graph using subspace correction (SSC).
Figure: An overview of the paradigm we use for algorithm development; it follows the same pattern as error correction.

Quantum startup Atom Computing first to exceed 1,000 qubits

Systems to be available in 2024, on path to fault-tolerant quantum computing this decade

October 24, 2023 - Boulder, CO - Atom Computing announced it has created a 1,225-site atomic array, currently populated with 1,180 qubits, in its next-generation quantum computing platform.

This is the first time a company has crossed the 1,000-qubit threshold with a universal gate-based system. The platform, planned for release next year, marks an industry milestone on the path toward fault-tolerant quantum computers capable of solving large-scale problems.

CEO Rob Hays said rapid scaling is a key benefit of Atom Computing’s unique atomic array technology.  “This order-of-magnitude leap – from 100 to 1,000-plus qubits within a generation – shows our atomic array systems are quickly gaining ground on more mature qubit modalities,” Hays said.  “Scaling to large numbers of qubits is critical for fault-tolerant quantum computing, which is why it has been our focus from the beginning. We are working closely with partners to explore near-term applications that can take advantage of these larger scale systems.”

Paul Smith-Goodson, vice president and a principal analyst at Moor Insights & Strategy, said the 1,000-plus qubit milestone makes Atom Computing a serious contender in the race to build a fault-tolerant system.

“It is highly impressive that Atom Computing, which was founded just five years ago, is going up against larger companies with more resources and holding its own,” he said. “The company has been laser focused on scaling its atomic array technology and is making rapid progress.”

Fault-tolerant quantum computers that can overcome errors during computations and deliver accurate results will require hundreds of thousands, if not millions, of physical qubits, along with other key capabilities.

Hays said Atom Computing continues to work toward these capabilities with its next-generation system, which provides new opportunities for its partners.

Guenter Klas, leader of the Quantum Research Cluster at Vodafone, said, “We welcome innovations like the neutral atom approach to building quantum computers from Atom Computing. In the end, we want quantum algorithms to make an economic difference and open up new opportunities, and for that goal scalable hardware, high fidelity, and long coherence times are very promising ingredients.”

Tommaso Demarie, CEO of Entropica Labs, a strategic partner of Atom Computing, said, “Developing a 1,000-plus qubit quantum technology marks an exceptional achievement for the Atom Computing team and the entire industry. With expanded computational capabilities, we can now delve deeper into the intricate realm of error correction schemes, designing and implementing strategies that pave the way for more reliable and scalable quantum computing systems. Entropica is enthusiastic about collaborating with Atom Computing as we create software that takes full advantage of their large-scale quantum computers."

Atom Computing is working with enterprise, academic, and government users today to develop applications and reserve time on the systems, which will be available in 2024.

To learn more about Atom Computing visit: https://atom-computing.com.

###

About Atom Computing

Atom Computing is building scalable quantum computers with arrays of optically trapped neutral atoms. We collaborate with researchers, organizations, governments, and companies to help develop quantum-enabled tools and solutions for the growing global ecosystem. Learn more at atom-computing.com, and follow us on LinkedIn and Twitter.

From semiconductors to quantum computing: What the US can learn from past oversight

Atom Computing: How the US can Win the Quantum Race

Atom Computing adds key leaders to accelerate quantum computing momentum with the U.S. government

August 30, 2023 — Berkeley, CA – Atom Computing announced it has appointed Ken Braithwaite, former Secretary of the Navy, to its Board of Directors and that Greg Muhlner has joined the company as Vice President of Public Sector to lead engagement with the U.S. government.

CEO Rob Hays said the addition of Braithwaite and Muhlner reflects the important role of the U.S. government in the advancement and adoption of quantum computing, noting Atom Computing’s collaborations with the U.S. Department of Defense, U.S. Department of Energy, and the National Science Foundation.

“The United States has a vibrant quantum ecosystem thanks, in part, to investments the federal government has made in quantum computing research and development, workforce initiatives, and procurement,” he said.  “Public-private partnership with our company will help to advance the technology and ensure U.S. leadership in this area of strategic importance. Ken (Braithwaite) and Greg (Muhlner) have extensive federal government experience that will help position Atom Computing as the premier partner to the U.S. in winning the race to large-scale, fault-tolerant quantum computing.”

Braithwaite was sworn in as the Secretary of the Navy in 2020 and previously served as a U.S. ambassador to Norway. He graduated from the U.S. Naval Academy in 1984 and was commissioned as an ensign in the U.S. Navy.  Braithwaite left active duty in 1993 but continued his service in the Navy Reserve while holding several executive leadership positions in private industry.

Muhlner has 15 years of experience in business development and sales to the federal government. Before joining Atom Computing, he was Vice President of Sales for Rebellion Defense and led Navy and U.S. Marine Corps sales at Amazon Web Services.  Muhlner served as a Naval Special Warfare (SEAL) Officer, participating in Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom. 

“Quantum computing is a disruptive technology that will redefine computing and the complexity of problems that we can solve,” Braithwaite said. “For our national security and to fuel our economy, it is imperative the United States and its allies win the quantum computing race.  I am proud to serve on Atom Computing’s board of directors to help the company achieve its mission in this critical new domain.”

About Atom Computing

Atom Computing is building scalable quantum computers with atomic arrays of optically trapped neutral atoms. We collaborate with researchers, organizations, governments, and companies to develop world-changing tools and solutions, and to support the growing global ecosystem. Learn more at atom-computing.com and follow us on LinkedIn.

Atom Computing and National Renewable Energy Laboratory exploring electric grid optimization using quantum computing

July 20, 2023 — Boulder, CO – Atom Computing and the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL) today announced a collaboration to explore how quantum computing can help optimize electric grid operations.

During this week’s IEEE Power and Energy Society general meeting, NREL researchers demonstrated how they incorporated Atom Computing’s atomic array quantum computing technologies into the lab’s Advanced Research on Integrated Energy Systems (ARIES) research platform and its hardware-in-the-loop testing. The result is a first-of-a-kind “quantum-in-the-loop” capability that can run certain types of optimization problems on a quantum computer.

Dr. Rob Hovsapian, a research advisor at NREL, called the new capability an important step toward understanding how quantum computers can better balance energy loads across an electric grid. 

“Electric grids are increasingly complex as we add new power generation resources such as wind and solar, electric vehicle charging, sensors and other devices,” he said.  “We are reaching the point where electric grids have more inputs and outputs than what our classical computing models can handle. By incorporating quantum computing into our testing platform, we can begin exploring how this technology could help solve certain problems.”

Optimization problems such as managing supply chains, devising more efficient transportation routes, and improving electric grid and telecommunications networks are considered “killer applications” for quantum computing. These large-scale problems involve enormous numbers of interacting factors and variables, which makes them well suited to the way quantum computers run calculations.

Keeping power flowing across an electric grid is a good example of an optimization problem. Power plants, wind turbines, and solar farms must generate enough electricity to meet demand, which can fluctuate depending on the time of day and weather conditions.  This electricity is then routed across miles and miles of transmission lines and delivered to homes, businesses, hospitals, and other facilities in real time.

Initially, NREL and Atom Computing are exploring how quantum computing can improve decisions about re-routing power between feeder lines (the lines that carry electricity from a substation to a local or regional service area) when a switch or line goes down.
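To make the shape of this optimization concrete, here is a deliberately tiny classical sketch in Python. The feeders, capacities, and loads are invented for illustration; the real ARIES problem has vastly more variables, and the quantum-in-the-loop setup searches the space very differently.

```python
# A toy formulation of the feeder re-routing decision, sketched classically.
# Feeders, capacities, and loads are invented for illustration.
import itertools

feeders = ["A", "B", "C"]
capacity = {"A": 100, "B": 80, "C": 60}   # MW each feeder can carry (assumed)
loads = [45, 35, 30, 25, 20]              # MW demanded by five service areas

def unserved(assignment):
    """Cost: total load that exceeds the capacity of its assigned feeder."""
    used = {f: 0 for f in feeders}
    for load, feeder in zip(loads, assignment):
        used[feeder] += load
    return sum(max(0, used[f] - capacity[f]) for f in feeders)

# Brute force over all feeder assignments; a quantum optimizer would search
# this exponentially growing space differently.
best = min(itertools.product(feeders, repeat=len(loads)), key=unserved)
print(best, unserved(best))   # an assignment with zero unserved load
```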

“Right now, operators primarily rely on their own experience to make this decision,” Hovsapian said. “This works but it doesn’t necessarily result in an optimal solution.  We are evaluating how a quantum computer can provide better data to make these decisions.”

Atom Computing CEO Rob Hays called the project an important example of how private industry and national laboratories can collaborate on quantum computing technology and valuable use case development. 

“Collaborations like this are extremely important for advancing quantum computing and scientific research,” Hays said.  “NREL is a global leader in renewable energy and electric grids.  We are proud to partner with them to advance their research.”

To learn more about Atom Computing visit: https://atom-computing.com.

###

About Atom Computing

Atom Computing is building scalable quantum computers with atomic arrays of optically trapped neutral atoms. We collaborate with researchers, organizations, governments, and companies to help develop quantum-enabled tools and solutions, and to support the growing global ecosystem. Learn more at atom-computing.com, and follow us on LinkedIn and Twitter.