Quantum error correction is a fundamental aspect of quantum computing, without which large-scale quantum computations are practically infeasible.
One frequently mentioned aspect of fault-tolerant quantum computing is that every error-correction protocol has an associated error-rate threshold. Basically, for a given computation to be protected against errors by a given protocol, the error rate of the gates must lie below a certain threshold.
In other words, if the error rates of individual gates are not low enough, it is not possible to apply error-correction protocols to make the computation more reliable.
Why is this? Why is it not possible to reduce error rates that are not already very low to begin with?
Answers:
We want to compare an output state with some ideal state, so, typically, the fidelity $F(|\psi\rangle,\rho)$ is used, as this is a good way of saying how well the possible measurement outcomes of $\rho$ compare with the possible measurement outcomes of $|\psi\rangle$, where $|\psi\rangle$ is the ideal output state and $\rho$ is the (potentially mixed) state reached after some noise process. As we are comparing with a pure state, this is
$$F(|\psi\rangle,\rho) = \sqrt{\langle\psi|\rho|\psi\rangle}.$$
Describing both the noise and the error-correction processes using Kraus operators, where $\mathcal{N}$ is the noise channel with Kraus operators $N_i$ and $\mathcal{E}$ is the error-correction channel with Kraus operators $E_j$, the state after noise is $\rho' = \mathcal{N}\left(|\psi\rangle\langle\psi|\right) = \sum_i N_i|\psi\rangle\langle\psi|N_i^\dagger$ and the state after both noise and error correction is $\rho = \mathcal{E}\circ\mathcal{N}\left(|\psi\rangle\langle\psi|\right) = \sum_{i,j} E_j N_i|\psi\rangle\langle\psi|N_i^\dagger E_j^\dagger$.
The fidelity of this is given by
$$F(|\psi\rangle,\rho) = \sqrt{\langle\psi|\rho|\psi\rangle} = \sqrt{\sum_{i,j}\langle\psi|E_j N_i|\psi\rangle\langle\psi|N_i^\dagger E_j^\dagger|\psi\rangle} = \sqrt{\sum_{i,j}\left|\langle\psi|E_j N_i|\psi\rangle\right|^2}.$$
For the error-correction protocol to be of any use, we want the fidelity after error correction to be larger than the fidelity after the noise but before error correction, so that the error-corrected state is less distinguishable from the ideal state. That is, we want $F(|\psi\rangle,\rho) > F(|\psi\rangle,\rho')$. This gives
$$\sqrt{\sum_{i,j}\left|\langle\psi|E_j N_i|\psi\rangle\right|^2} > \sqrt{\sum_{i}\left|\langle\psi|N_i|\psi\rangle\right|^2}.$$
Splitting the noise into the correctable part, $\mathcal{N}_c$, for which $\mathcal{E}\circ\mathcal{N}_c\left(|\psi\rangle\langle\psi|\right) = |\psi\rangle\langle\psi|$, and the non-correctable part, $\mathcal{N}_{nc}$, for which $\mathcal{E}\circ\mathcal{N}_{nc}\left(|\psi\rangle\langle\psi|\right) = \sigma$, and denoting the probability that the error is correctable as $P_c$ and that it is non-correctable (i.e. too many errors have occurred to reconstruct the ideal state) as $P_{nc}$, gives
$$\sum_{i,j}\left|\langle\psi|E_j N_i|\psi\rangle\right|^2 = P_c + P_{nc}\langle\psi|\sigma|\psi\rangle \geq P_c,$$
where equality holds under the assumption $\langle\psi|\sigma|\psi\rangle = 0$.
For qubits, with an (equal) probability $p$ of error on each qubit (note: this is not the same as the noise parameter, which would have to be used to calculate the probability of an error), the probability of having a correctable error (assuming that the $n$ qubits were used to encode $k$ qubits, allowing errors on up to $t$ qubits, determined by the Singleton bound $n-k\geq 4t$) is
$$P_c = \sum_{j=0}^{t}\binom{n}{j}p^j(1-p)^{n-j}.$$
Edit from comments:
As $P_c + P_{nc} = 1$, this gives
$$F(|\psi\rangle,\rho) \geq \sqrt{1 - P_{nc}}.$$
Plugging this in as above further gives
$$F(|\psi\rangle,\rho) \geq \sqrt{\sum_{j=0}^{t}\binom{n}{j}p^j(1-p)^{n-j}}.$$
This also shows that, although error correction can increase the fidelity, it can't increase the fidelity to $1$, especially as there will be errors (e.g. gate errors from not being able to perfectly implement any gate in reality) arising from implementing the error correction. As any reasonably deep circuit requires, by definition, a reasonable number of gates, the fidelity after each gate is going to be less than the fidelity after the previous gate (on average), and the error correction protocol is going to be less effective. There will then be a cut-off number of gates at which point the error correction protocol will decrease the fidelity and the errors will continually compound.
This shows, to a rough approximation, that error correction, or merely reducing the error rates, is not enough for fault tolerant computation, unless errors are extremely low, depending on the circuit depth.
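As a rough numerical illustration of the bound above, here is a minimal Python sketch (my own addition, not part of the answer; the parameter choice $n=5$, $t=1$ and the function name are arbitrary) that evaluates $P_c = \sum_{j=0}^{t}\binom{n}{j}p^j(1-p)^{n-j}$ and the resulting fidelity lower bound $\sqrt{P_c}$ for a few physical error probabilities $p$:

```python
from math import comb, sqrt

def p_correctable(n: int, t: int, p: float) -> float:
    """Probability that at most t of the n physical qubits suffer an error,
    i.e. that the error pattern is one the code can correct."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1))

# Example: a code on n = 5 qubits correcting errors on up to t = 1 qubit.
n, t = 5, 1
for p in (0.001, 0.01, 0.1, 0.3):
    Pc = p_correctable(n, t, p)
    print(f"p = {p:5.3f}   P_c = {Pc:.6f}   fidelity bound sqrt(P_c) = {sqrt(Pc):.6f}")
```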
source
There is a good mathematical answer already, so I'll try and provide an easy-to-understand one.
Quantum error correction (QEC) is a (group of) rather complex algorithm(s) that requires a lot of actions (gates) on and between qubits. In QEC, you pretty much connect two qubits to a third helper qubit (ancilla) and transfer the information about whether the other two are equal (in some specific regard) into that third qubit. Then you read that information out of the ancilla. If it tells you that they are not equal, you act on that information (apply a correction). So how can that go wrong if our qubits and gates are not perfect?
QEC can make the information stored in your qubits decay. Each of these gates can degrade the information stored in the qubits if it is not executed perfectly. So if merely executing the QEC destroys more information than it recovers on average, it is useless.
You think you found an error, but you didn't. If the comparison (execution of gates) or the readout of the information (ancilla) is imperfect, you might obtain wrong information and thus apply "wrong corrections" (read: introduce errors). And if the information in the ancillas decays (or is changed by noise) before you can read it out, you will likewise get a wrong readout.
The goal of every QEC scheme is obviously to introduce fewer errors than it corrects for, so you need to minimize the aforementioned effects. If you do all the math, you find pretty strict requirements on your qubits, gates and readouts (depending on the exact QEC algorithm you choose).
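As a concrete toy model of these failure modes, here is a small Monte Carlo sketch (my own illustration, not from the answer; the error model, numbers and function names are assumptions) of a 3-qubit bit-flip repetition code in which both the data bits and the two parity (ancilla) readouts can fail, so that "corrections" are sometimes applied in the wrong place:

```python
import random

def run_once(p_data: float, p_readout: float) -> bool:
    """One round of the 3-qubit bit-flip repetition code with noisy
    syndrome readout. Returns True if a logical error remains afterwards."""
    # Encode logical 0 as 000, then flip each data bit with probability p_data.
    bits = [1 if random.random() < p_data else 0 for _ in range(3)]

    # Ideal parity checks (what the two ancillas should report), each readout
    # then flipped with probability p_readout to model faulty measurement.
    s01 = bits[0] ^ bits[1]
    s12 = bits[1] ^ bits[2]
    if random.random() < p_readout:
        s01 ^= 1
    if random.random() < p_readout:
        s12 ^= 1

    # Standard decoding: the syndrome pattern points at a single bit to flip.
    if s01 and not s12:
        bits[0] ^= 1
    elif s01 and s12:
        bits[1] ^= 1
    elif s12 and not s01:
        bits[2] ^= 1

    # Logical error if the majority of the data bits ended up flipped.
    return sum(bits) >= 2

def logical_error_rate(p_data: float, p_readout: float, shots: int = 100_000) -> float:
    return sum(run_once(p_data, p_readout) for _ in range(shots)) / shots

for p in (0.01, 0.05, 0.10):
    print(f"p = {p:4.2f}  perfect readout: {logical_error_rate(p, 0.0):.4f}"
          f"  readout as noisy as data: {logical_error_rate(p, p):.4f}")
```

With perfect readout the logical error rate scales roughly as $3p^2$, well below $p$ for small $p$; once the readout is as noisy as the data, wrong corrections add extra failure paths and the advantage over the bare error rate shrinks.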
source
Classical Version
Think about a simple strategy of classical error correction. You've got a single bit that you want to encode, say as $0\mapsto 00000$ and $1\mapsto 11111$. If each bit is flipped independently with some probability $p$, you decode by taking a majority vote over the five bits, and as long as $p<\frac{1}{2}$ this gives a lower logical error rate than leaving the bit unencoded.
On the other hand, if we know that $p>\frac{1}{2}$, we can still do the correction. You'd just be implementing a minority vote! The point, however, is that you have to do completely the opposite operation. There's a sharp threshold here that shows, at the very least, that you need to know which regime you're working in.
For fault-tolerance, things get messier: the $01010$ string that you got might not be what the state actually is. It might be something different, still with some errors that you have to correct, but the measurements you've made in reading the bits are also slightly faulty. Crudely, you might imagine this turns the sharp transition into an ambiguous region where you don't really know what to do. Still, if error probabilities are low enough, or high enough, you can correct; you just need to know which is the case.
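A quick way to see the sharp threshold at $p=\frac{1}{2}$ is to compute the failure probability of both decoding rules for the 5-bit repetition code (a sketch of my own, not part of the answer):

```python
from math import comb

def majority_vote_fails(n: int, p: float) -> float:
    """Probability that a majority vote over n copies of a bit, each flipped
    independently with probability p, returns the wrong value."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(n // 2 + 1, n + 1))

n = 5
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    maj = majority_vote_fails(n, p)    # decode with a majority vote
    mino = 1 - maj                     # decode with a minority vote instead
    print(f"p = {p}:  majority fails {maj:.3f},  minority fails {mino:.3f}")
```

Below $p=\frac{1}{2}$ the majority vote wins, above it the minority vote wins, and at exactly $p=\frac{1}{2}$ neither rule does better than a coin flip.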
Quantum Version
In general, things get worse in the quantum regime because you have to deal with two types of errors: bit-flip errors ($X$) and phase-flip errors ($Z$), and that tends to make the ambiguous region bigger. I won't go further into details here. However, there's a cute argument in the quantum regime that may be illuminating.
Imagine you have the state of a single logical qubit stored in a quantum error correcting code, $|\psi\rangle$, across $N$ physical qubits. It doesn't matter what that code is; this is a completely general argument. Now imagine there's so much noise that it destroys the quantum state on $\lceil N/2\rceil$ qubits ("so much noise" actually means that errors happen with 50:50 probability, not close to 100%, which, as we've already said, can be corrected). It is impossible to correct for that error. How do I know that? Imagine I had a completely noiseless version, and I keep $\lfloor N/2\rfloor$ qubits and give the remaining qubits to you. We each introduce enough blank qubits so that we've got $N$ qubits in total, and we run error correction on them.
If it were possible to perform that error correction, the outcome would be that both of us would have the original state |ψ⟩ . We would have cloned the logical qubit! But cloning is impossible, so the error correction must have been impossible.
source
To me there seem to be two parts of this question (one more related to the title, one more related to the question itself):
1) To which amount of noise are error correction codes effective?
2) With which amount of imperfection in gates can we implement fault-tolerant quantum computations?
Let me first stress the difference: quantum error correction codes can be used in many different scenarios, for example to correct for losses in transmission. Here the amount of noise mostly depends on the length of the optical fibre and not on the imperfection of the gates. However, if we want to implement fault-tolerant quantum computation, the gates are the main source of noise.
On 1)
Error correction works for large error rates (smaller than $1/2$). Take for example the simple 3-qubit repetition code. The logical error rate is just the probability of the majority vote being wrong, which can be compared against the line $f(p)=p$.
So whenever the physical error rate $p$ is below $1/2$, the logical error rate is smaller than $p$. Note, however, that the code is particularly effective for small $p$, because it changes the rate from $O(p)$ to $O(p^2)$ behaviour.
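Since the plot itself is not reproduced here, the same comparison can be made numerically. A short sketch (my own, under the stated assumption of independent bit flips) of the 3-qubit repetition code's logical error rate under majority-vote decoding, $p_L = 3p^2(1-p) + p^3$, against the physical rate $p$:

```python
def p_logical(p: float) -> float:
    """Majority vote fails when at least 2 of the 3 bits are flipped."""
    return 3 * p**2 * (1 - p) + p**3

for p in (0.001, 0.01, 0.1, 0.25, 0.5):
    pl = p_logical(p)
    print(f"p = {p:5.3f}   p_L = {pl:.6f}   code helps: {pl < p}")
```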
On 2)
We want to perform arbitrarily long quantum computations with a quantum computer. However, the quantum gates are not perfect. In order to cope with the errors introduced by the gates, we use quantum error correction codes. This means that one logical qubit is encoded into many physical qubits. This redundancy allows us to correct a certain amount of errors on the physical qubits, such that the information stored in the logical qubit remains intact. Bigger codes allow longer calculations to still be accurate. However, larger codes involve more gates (for example more syndrome measurements), and these gates introduce noise. You see there is some trade-off here, and which code is optimal is not obvious.
If the noise introduced by each gate is below some threshold (the fault-tolerance or accuracy threshold), then it is possible to increase the code size to allow for arbitrarily long calculations. This threshold depends on the code we started with (usually it is iteratively concatenated with itself). There are several ways to estimate this value. Often it is done by numerical simulation: Introduce random errors and check whether the calculation still worked. This method typically gives threshold values which are too high. There are also some analytical proofs in the literature, for example this one by Aliferis and Cross.
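The effect of such a threshold under concatenation can be illustrated with the usual toy recursion $p_{l+1} \approx c\,p_l^2$ (a textbook caricature, not the specific analysis of Aliferis and Cross; the constant $c=100$ below is an arbitrary illustrative choice, giving a threshold $p_{\mathrm{th}} = 1/c$):

```python
# Toy model of code concatenation: each level squares the error and
# multiplies it by a constant c, so p_{l+1} = c * p_l**2 and p_th = 1/c.
c = 100.0
p_th = 1 / c

for p0 in (0.5 * p_th, 0.9 * p_th, 1.1 * p_th):
    p, rates = p0, [p0]
    for _ in range(4):                 # four levels of concatenation
        p = min(c * p * p, 1.0)
        rates.append(p)
    print(f"p0 = {p0 / p_th:.1f} * p_th: " + ", ".join(f"{r:.1e}" for r in rates))
```

Below the threshold each extra level of concatenation squares the suppression of the logical error rate; above it, adding levels only makes things worse.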
source
You need a surprisingly large number of quantum gates to implement a quantum error correcting code in a fault-tolerant manner. One part of the reason is that there are many errors to detect, since a code that can correct all single-qubit errors already requires 5 qubits, and each error can be of three kinds (corresponding to unintentional $X$, $Y$, $Z$ gates). Hence, even just to correct any single-qubit error, you already need logic to distinguish between these 15 errors plus the no-error situation: $XIIII$, $YIIII$, $ZIIII$, $IXIII$, $IYIII$, $IZIII$, $IIXII$, $IIYII$, $IIZII$, $IIIXI$, $IIIYI$, $IIIZI$, $IIIIX$, $IIIIY$, $IIIIZ$, $IIIII$, where $X$, $Y$, $Z$ are the possible single-qubit errors and $I$ (identity) denotes the no-error-for-this-qubit situation.
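For concreteness, the sixteen cases can be enumerated programmatically (my own illustration; the code below just lists the strings and notes that, for a code encoding $k=1$ logical qubit in $n=5$ physical qubits, the $2^{n-k}=16$ available syndromes are exactly enough to tell them apart):

```python
from itertools import product

n = 5
errors = ["I" * n]                                  # the no-error case IIIII
for pos, pauli in product(range(n), "XYZ"):         # 5 positions x 3 Pauli errors
    errors.append("I" * pos + pauli + "I" * (n - pos - 1))

print(len(errors), "cases to distinguish:", ", ".join(errors))
print("syndromes available: 2**(n - k) =", 2 ** (n - 1))
```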
The main part of the reason is, however, that you cannot use straightforward error detection circuitry: every CNOT (or any other nontrivial gate on two or more qubits) forwards errors from one qubit to another qubit, which would be disastrous for the most trivial case of a single-qubit error correcting code and still very bad for more sophisticated codes. Hence a fault-tolerant (useful) implementation of error correction needs even more effort than one might naively think.
With many gates per error-correcting step, you can only permit a very low error rate per step. Here yet another problem arises: since you may have coherent errors, you must be ready for the worst case that an error $\epsilon$ propagates not as $N\epsilon$ after $N$ single-qubit gates but as $N^2\epsilon$. This value must remain sufficiently low so that you gain overall after correcting some (but not all) errors, for example single-qubit errors only.
An example of a coherent error is an implementation of a gate $G$ that applies, to first order, not simply $G$ but $G + \sqrt{\epsilon}\,X$, which you might call an error of $\epsilon$ because $\epsilon$ is the probability corresponding to the probability amplitude $\sqrt{\epsilon}$, and hence the probability that a measurement directly after the gate reveals that it acted as the error $X$. After $N$ applications of this gate, again to first order, you have actually applied $G^N + N\sqrt{\epsilon}\,G^{N-1}X$ (if $G$ and $X$ commute; otherwise a more complicated construct that has $N$ distinct terms proportional to $\sqrt{\epsilon}$). Hence you would, if measuring then, find an error probability of $N^2\epsilon$.
Incoherent errors are more benign. Yet if one must give a single value as an error threshold, then one cannot choose to only assume benign errors!
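The $N^2\epsilon$ versus $N\epsilon$ scaling can be checked with a short numerical experiment (my own sketch; modelling the coherent error as a small over-rotation with $\sin^2\theta = \epsilon$ is one common choice, not necessarily what the author had in mind):

```python
from math import sin, asin, sqrt

eps = 1e-4                # single-gate error probability
theta = asin(sqrt(eps))   # coherent over-rotation angle, sin(theta)^2 = eps
N = 20                    # number of gate applications

# Coherent case: the rotation angles add, so after N gates the error
# probability is sin(N*theta)^2, roughly N^2 * eps for small angles.
coherent = sin(N * theta) ** 2

# Incoherent case: independent flips with probability eps each; the state is
# wrong iff an odd number of flips occurred, roughly N * eps for small eps.
incoherent = (1 - (1 - 2 * eps) ** N) / 2

print(f"coherent   error prob: {coherent:.4e}  (N^2 * eps = {N**2 * eps:.4e})")
print(f"incoherent error prob: {incoherent:.4e}  (N   * eps = {N * eps:.4e})")
```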
source