
Google, quantum computer, chip, Willow, quantum error correction, 10 septillion (10^25) years, quantum chip

Jobs 9 2024. 12. 12. 18:14

 

 

Google unveils latest quantum computing chip 'Willow': "a breakthrough in error correction"

 

Google announced that it has made progress in the race to build a practical quantum computer with its new Willow chip.

Realizing the transformative potential of quantum computers still requires time, owing to two core problems: error correction and computing performance. In addition, the quality of qubits (quantum bits), the basic unit of information in quantum computing, is not yet sufficient to sustain long computations.

Hartmut Neven, founder and lead of the Google Quantum AI lab, explained that the new Willow quantum chip delivers two breakthrough advances.

First, "Willow can dramatically reduce errors as the number of qubits increases, cracking a key challenge in quantum error correction that the field has pursued for roughly 30 years."
Second, "Willow can perform a standard benchmark computation in under five minutes. The fastest supercomputer in existence would need 10 septillion (10^25) years to perform the same computation, a span that vastly exceeds the age of the universe."

 

 

Improved error correction
To measure performance, Google used the random circuit sampling (RCS) benchmark, developed in-house at its Quantum AI lab. RCS is widely used in the quantum computing field.


More important than the gain in computing performance is the progress in error correction. Until now, a major problem has been that error rates grow as the number of qubits increases.

Neven said, "The more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes."

Google's research team tested ever-larger arrays of physical qubits, starting from a 3x3 grid of encoded qubits and scaling up to 5x5 and 7x7 grids. Each time, using the latest quantum error correction techniques, they were able to cut the error rate in half; in other words, the error rate fell exponentially.


Commercial applications draw closer
The researchers also achieved further progress: the quality of the qubit arrays improved so that they now live considerably longer than the individual physical qubits. This means quantum computers can sustain computations for longer periods.

Neven described this as the most convincing prototype of a scalable logical qubit to date, and a sign that practical, very large-scale quantum computers can indeed be built. He argued that Willow will enable practical, commercially relevant algorithms that cannot be replicated on conventional computers.

Earlier this year, Microsoft announced a breakthrough in quantum computing with a qubit virtualization system that broke records for creating logical qubits, and it is targeting commercialization.


With recent developments such as Chinese researchers demonstrating quantum-computing attacks on RSA encryption, advisory firms have urged CIOs and CISOs to prepare resilience plans for the post-quantum cryptography era.





Google unveils ultra-high-performance quantum chip 'Willow'

 

Google has unveiled 'Willow,' a chip for ultra-high-performance quantum computers, presenting new possibilities for quantum computing technology.

According to Crypto-Economy, the advance demonstrates quantum computing's potential to surpass the limits of classical computing, and it has sparked debate about the implications for the security of Bitcoin and other cryptocurrencies.

Overcoming the limits of quantum computing

Willow tackled a technical challenge considered impossible for traditional computers. Google said Willow solved in just five minutes a computation that would take an ordinary supercomputer 10 septillion (10^25) years.
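A back-of-the-envelope calculation makes the claimed gap concrete, using only the two figures Google reported (five minutes versus 10 septillion years):

```python
# Rough scale comparison using the figures reported by Google (illustrative only).
classical_years = 1e25            # claimed classical runtime: 10 septillion years
quantum_minutes = 5               # claimed Willow runtime
minutes_per_year = 365.25 * 24 * 60

speedup = classical_years * minutes_per_year / quantum_minutes
age_of_universe_years = 1.38e10   # approximate age of the universe

print(f"effective speedup factor: {speedup:.2e}")
print(f"classical runtime vs. age of universe: {classical_years / age_of_universe_years:.2e}x")
```

The effective speedup comes out around 10^30, and the classical runtime is roughly 10^14 times the age of the universe.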

This means Willow demonstrated enormous computing power compared with existing technology.

However, Willow's 105 qubits are still judged far from sufficient to threaten cryptocurrency security.


Bitcoin security: safe for now

Bitcoin and other cryptocurrencies are currently protected by ECDSA (the Elliptic Curve Digital Signature Algorithm) and the SHA-256 hash algorithm. These protect private keys, enable digital signatures, and guarantee the integrity of the blockchain and its mining mechanism.

Breaking these cryptographic schemes is estimated to require millions of qubits, far beyond Willow's 105. Willow therefore poses no practical near-term threat to the security of cryptocurrencies such as Bitcoin.
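For a sense of scale, the gap can be expressed as simple arithmetic. The figure below is one published order-of-magnitude estimate (Gidney and Ekerå's 2019 analysis of factoring RSA-2048 with Shor's algorithm); estimates for attacking 256-bit elliptic-curve keys vary widely with error-correction assumptions:

```python
# Order-of-magnitude gap between Willow and a cryptographically relevant machine.
willow_qubits = 105
# One published estimate (Gidney & Ekerå, 2019): ~20 million noisy physical
# qubits to factor RSA-2048. Used here purely as an illustrative yardstick.
estimated_qubits_needed = 20_000_000

gap = estimated_qubits_needed / willow_qubits
print(f"roughly {gap:,.0f}x more qubits would be needed")  # ~190,476x
```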

The emergence of quantum-resistant solutions

Quantum-resistant cryptography is being developed to counter the quantum computing threat. Adopting it could require fundamental changes to blockchain networks, such as larger block sizes or a hard fork (chain split).

Developers are already preparing Bitcoin and other networks for future quantum threats. Some experts speculate that a practical quantum threat could emerge within a decade, but most consider the risk still far off.
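As a toy illustration of the hash-based family of quantum-resistant schemes (a sketch only, not anything Bitcoin uses today), a Lamport one-time signature relies solely on a hash function such as SHA-256, which Shor's algorithm does not break:

```python
import hashlib
import secrets

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    """256 pairs of random secrets; the public key is their hashes."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(sha256(a), sha256(b)) for a, b in sk]
    return sk, pk

def sign(message: bytes, sk):
    """Reveal one secret per bit of the message digest (keys are one-time use)."""
    digest = int.from_bytes(sha256(message), "big")
    return [sk[i][(digest >> i) & 1] for i in range(256)]

def verify(message: bytes, signature, pk) -> bool:
    digest = int.from_bytes(sha256(message), "big")
    return all(sha256(signature[i]) == pk[i][(digest >> i) & 1] for i in range(256))
```

Security rests only on the one-wayness of the hash (weakened at most quadratically by Grover's algorithm, not broken by Shor's), at the cost of large keys and one-time use; production hash-based schemes such as SPHINCS+ build on the same idea.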

Willow represents an important advance in quantum computing research, but for now it poses no serious threat to global cryptographic security. Google's announcement confirms the pace of progress in quantum technology while underscoring the need for the cryptocurrency community to keep watching and preparing for technological change.

 

 

 

Willow is Google's latest quantum chip. It has state-of-the-art performance across a number of metrics, enabling two major achievements.

The first is that Willow can reduce errors exponentially as we scale up using more qubits. This cracks a key challenge in quantum error correction that the field has pursued for almost 30 years. 


Second, Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10^25) years — a number that vastly exceeds the age of the Universe.


The Willow chip is a major step on a journey that began over 10 years ago. When I founded Google Quantum AI in 2012, the vision was to build a useful, large-scale quantum computer that could harness quantum mechanics — the “operating system” of nature to the extent we know it today — to benefit society by advancing scientific discovery, developing helpful applications, and tackling some of society's greatest challenges. As part of Google Research, our team has charted a long-term roadmap, and Willow moves us significantly along that path towards commercially relevant applications.

Exponential quantum error correction — below threshold!
Errors are one of the greatest challenges in quantum computing, since qubits, the units of computation in quantum computers, have a tendency to rapidly exchange information with their environment, making it difficult to protect the information needed to complete a computation. Typically the more qubits you use, the more errors will occur, and the system becomes classical. 

Today in Nature, we published results showing that the more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes. We tested ever-larger arrays of physical qubits, scaling up from a grid of 3x3 encoded qubits, to a grid of 5x5, to a grid of 7x7 — and each time, using our latest advances in quantum error correction, we were able to cut the error rate in half. In other words, we achieved an exponential reduction in the error rate. This historic accomplishment is known in the field as “below threshold” — being able to drive errors down while scaling up the number of qubits. You must demonstrate being below threshold to show real progress on error correction, and this has been an outstanding challenge since quantum error correction was introduced by Peter Shor in 1995. 
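The scaling just described can be sketched numerically. Assuming an error-suppression factor Λ of about 2 per code-distance step (consistent with "cut the error rate in half"; the starting logical error rate below is a made-up placeholder), the logical error rate falls exponentially with distance:

```python
# Exponential error suppression with code distance (illustrative numbers).
# Grids of 3x3, 5x5, 7x7 correspond to surface-code distances d = 3, 5, 7.
lambda_factor = 2.0   # suppression per distance step; "below threshold" means > 1
eps_d3 = 3e-3         # hypothetical logical error rate at distance 3

rates = {}
for d in (3, 5, 7):
    steps = (d - 3) // 2                      # distance steps beyond d = 3
    rates[d] = eps_d3 / lambda_factor ** steps

for d, eps in rates.items():
    print(f"distance {d}: logical error rate ~ {eps:.1e}")
# Each step halves the rate: 3.0e-03 -> 1.5e-03 -> 7.5e-04.
```

Being "below threshold" corresponds to Λ > 1: adding qubits then suppresses errors instead of amplifying them.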

There are other scientific “firsts” involved in this result as well. For example, it’s also one of the first compelling examples of real-time error correction on a superconducting quantum system — crucial for any useful computation, because if you can’t correct errors fast enough, they ruin your computation before it’s done. And it’s a "beyond breakeven" demonstration, where our arrays of qubits have longer lifetimes than the individual physical qubits do, an unfakable sign that error correction is improving the system overall.

As the first system below threshold, this is the most convincing prototype for a scalable logical qubit built to date. It’s a strong sign that useful, very large quantum computers can indeed be built. Willow brings us closer to running practical, commercially-relevant algorithms that can’t be replicated on conventional computers.

10 septillion years on one of today’s fastest supercomputers
As a measure of Willow’s performance, we used the random circuit sampling (RCS) benchmark. Pioneered by our team and now widely used as a standard in the field, RCS is the classically hardest benchmark that can be done on a quantum computer today. You can think of this as an entry point for quantum computing — it checks whether a quantum computer is doing something that couldn’t be done on a classical computer. Any team building a quantum computer should check first if it can beat classical computers on RCS; otherwise there is strong reason for skepticism that it can tackle more complex quantum tasks. We’ve consistently used this benchmark to assess progress from one generation of chip to the next — we reported Sycamore results in October 2019 and again recently in October 2024.
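The idea behind RCS can be sketched with a toy linear cross-entropy benchmarking (XEB) check. Here a Haar-random unitary on a handful of simulated qubits stands in for a real random circuit; all sizes and sample counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(dim: int, rng) -> np.ndarray:
    """Sample a Haar-random unitary via QR decomposition of a Gaussian matrix."""
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix phases for Haar measure

n_qubits = 5
dim = 2 ** n_qubits
u = haar_unitary(dim, rng)
probs = np.abs(u[:, 0]) ** 2   # output distribution of |0...0> after the "circuit"

# A perfect device samples bitstrings from `probs`; a broken one samples uniformly.
good_samples = rng.choice(dim, size=4000, p=probs)
bad_samples = rng.integers(dim, size=4000)

def linear_xeb(samples, probs, dim):
    """Linear XEB fidelity: close to 1 for an ideal sampler, ~0 for uniform noise."""
    return dim * probs[samples].mean() - 1

xeb_ideal = linear_xeb(good_samples, probs, dim)
xeb_noisy = linear_xeb(bad_samples, probs, dim)
print(f"ideal sampler XEB: {xeb_ideal:.2f}")
print(f"noisy sampler XEB: {xeb_noisy:.2f}")
```

The check works because a random circuit's output distribution is heavily skewed, so a device that is really running the circuit hits the high-probability bitstrings more often than uniform noise would.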

Willow’s performance on this benchmark is astonishing: It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10^25 or 10 septillion years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe. It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.

These latest results for Willow, as shown in the plot below, are our best so far, but we’ll continue to make progress. 

A chart comparing the performance of different quantum computing platforms, on the task of random circuit sampling (RCS).
Computational costs are heavily influenced by available memory. Our estimates therefore consider a range of scenarios, from an ideal situation with unlimited memory (▲) to a more practical, embarrassingly parallelizable implementation on GPUs (⬤). 

Our assessment of how Willow outpaces one of the world’s most powerful classical supercomputers, Frontier, was based on conservative assumptions. For example, we assumed full access to secondary storage, i.e., hard drives, without any bandwidth overhead — a generous and unrealistic allowance for Frontier. Of course, as happened after we announced the first beyond-classical computation in 2019, we expect classical computers to keep improving on this benchmark, but the rapidly growing gap shows that quantum processors are peeling away at a double exponential rate and will continue to vastly outperform classical computers as we scale up. 

A video with Principal Scientist Sergio Boixo, Founder and Lead Hartmut Neven, and renowned physicist John Preskill discussing random circuit sampling, a benchmark that demonstrates beyond-classical performance in quantum computers. 

State-of-the-art performance
Willow was fabricated in our new, state-of-the-art fabrication facility in Santa Barbara — one of only a few facilities in the world built from the ground up for this purpose. System engineering is key when designing and fabricating quantum chips: All components of a chip, such as single and two-qubit gates, qubit reset, and readout, have to be simultaneously well engineered and integrated. If any component lags or if two components don't function well together, it drags down system performance. Therefore, maximizing system performance informs all aspects of our process, from chip architecture and fabrication to gate development and calibration. The achievements we report assess quantum computing systems holistically, not just one factor at a time. 

We’re focusing on quality, not just quantity — because just producing larger numbers of qubits doesn’t help if they’re not high enough quality. With 105 qubits, Willow now has best-in-class performance across the two system benchmarks discussed above: quantum error correction and random circuit sampling. Such algorithmic benchmarks are the best way to measure overall chip performance. Other more specific performance metrics are also important; for example, our T1 times, which measure how long qubits can retain an excitation — the key quantum computational resource — are now approaching 100 µs (microseconds). This is an impressive ~5x improvement over our previous generation of chips. If you want to evaluate quantum hardware and compare across platforms, here is a table of key specifications:
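The practical meaning of a longer T1 can be sketched with the standard exponential decay model P(t) = exp(-t/T1). The circuit duration below is a hypothetical placeholder, and the previous-generation T1 is inferred from the stated ~5x improvement:

```python
import math

# How much longer excitations survive with the improved T1 (illustrative).
t1_new_us = 100.0           # Willow: T1 approaching 100 microseconds
t1_old_us = t1_new_us / 5   # inferred from the ~5x improvement claim

circuit_us = 10.0  # hypothetical 10-microsecond computation
retained_new = math.exp(-circuit_us / t1_new_us)
retained_old = math.exp(-circuit_us / t1_old_us)
print(f"excitation retained: new {retained_new:.2%} vs old {retained_old:.2%}")
```

Because the decay is exponential, a 5x longer T1 buys disproportionately more usable circuit depth before the stored excitation is lost.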

a table chart reading "Willow System Metrics" with columns showing details like number of qubits (105) and average connectivity (3.47)
Willow’s performance across a number of metrics. 

What’s next with Willow and beyond
The next challenge for the field is to demonstrate a first "useful, beyond-classical" computation on today's quantum chips that is relevant to a real-world application. We’re optimistic that the Willow generation of chips can help us achieve this goal. So far, there have been two separate types of experiments. On the one hand, we’ve run the RCS benchmark, which measures performance against classical computers but has no known real-world applications. On the other hand, we’ve done scientifically interesting simulations of quantum systems, which have led to new scientific discoveries but are still within the reach of classical computers. Our goal is to do both at the same time — to step into the realm of algorithms that are beyond the reach of classical computers and that are useful for real-world, commercially relevant problems. 

an illustrated chart reading "Random Circuit Sampling (RCS): in context"
Random circuit sampling (RCS), while extremely challenging for classical computers, has yet to demonstrate practical commercial applications. 
 
We invite researchers, engineers, and developers to join us on this journey by checking out our open source software and educational resources, including our new course on Coursera, where developers can learn the essentials of quantum error correction and help us create algorithms that can solve the problems of the future. 

an illustrated card reading "Our quantum computing roadmap" and a timeline showing 6 milestones from "Beyond classical" to "Large error-corrected quantum computer"
My colleagues sometimes ask me why I left the burgeoning field of AI to focus on quantum computing. My answer is that both will prove to be the most transformational technologies of our time, but advanced AI will significantly benefit from access to quantum computing. This is why I named our lab Quantum AI. Quantum algorithms have fundamental scaling laws on their side, as we’re seeing with RCS. There are similar scaling advantages for many foundational computational tasks that are essential for AI. So quantum computation will be indispensable for collecting training data that’s inaccessible to classical machines, training and optimizing certain learning architectures, and modeling systems where quantum effects are important. This includes helping us discover new medicines, designing more efficient batteries for electric cars, and accelerating progress in fusion and new energy alternatives. Many of these future game-changing applications won’t be feasible on classical computers; they’re waiting to be unlocked with quantum computing. 
