
Slide1

Lecture 3: Quantum simulation algorithms

Dominic Berry
Macquarie University

Slide2

Simulation of Hamiltonians
Seth Lloyd, 1996

We want to simulate the evolution $e^{-iHt}$.
The Hamiltonian is a sum of terms: $H = \sum_{j=1}^{m} H_j$.
We can perform each $e^{-iH_j t}$ individually.
For short times we can use $e^{-iHt} \approx e^{-iH_1 t} e^{-iH_2 t} \cdots e^{-iH_m t}$.
For long times, divide $t$ into $r$ short intervals and repeat: $e^{-iHt} = \big(e^{-iHt/r}\big)^r$.

Slide3

Simulation of Hamiltonians
Seth Lloyd, 1996

For short times we can use $e^{-iHt} \approx e^{-iH_1 t} e^{-iH_2 t} \cdots e^{-iH_m t}$.
This approximation works because $e^{-iH_1 t} e^{-iH_2 t} = e^{-i(H_1+H_2)t} + O(t^2)$.
If we divide a long time $t$ into $r$ intervals, then $e^{-iHt} = \big(e^{-iHt/r}\big)^r \approx \Big(\prod_j e^{-iH_j t/r}\Big)^r$, with total error $O(t^2/r)$.
Typically, we want to simulate a system with some maximum allowable error $\epsilon$.
Then we need $r = O(t^2/\epsilon)$.
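A minimal numerical sketch of the first-order (Lie-Trotter) product formula above, assuming NumPy/SciPy; the random Hermitian matrices simply stand in for the terms $H_j$, and all names are illustrative rather than taken from the slides.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(dim):
    """Random Hermitian matrix standing in for one term H_j."""
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (a + a.conj().T) / 2

# Hamiltonian as a sum of two terms, H = H1 + H2.
H1, H2 = random_hermitian(4), random_hermitian(4)
H = H1 + H2
t = 1.0

for r in (1, 10, 100, 1000):
    # (e^{-i H1 t/r} e^{-i H2 t/r})^r : first-order Trotter formula
    step = expm(-1j * H1 * t / r) @ expm(-1j * H2 * t / r)
    trotter = np.linalg.matrix_power(step, r)
    error = np.linalg.norm(trotter - expm(-1j * H * t), 2)
    print(f"r = {r:5d}   error = {error:.2e}")   # decreases roughly as 1/r
```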

Slide4

Higher-order simulation
Berry, Ahokas, Cleve, Sanders, 2007

A higher-order (symmetric) decomposition is $e^{-iHt} \approx e^{-iH_1 t/2} \cdots e^{-iH_m t/2}\, e^{-iH_m t/2} \cdots e^{-iH_1 t/2}$, with error $O(t^3)$.
If we divide a long time $t$ into $r$ intervals, then the total error is $O(t^3/r^2)$.
Then we need $r = O(t^{3/2}/\epsilon^{1/2})$.
A general product formula of order $2k$ can give error $O(t^{2k+1})$ for time $t$.
For time $t/r$ the error is $O\big((t/r)^{2k+1}\big)$, so over $r$ intervals the total error is $O(t^{2k+1}/r^{2k})$.
To bound the error as $\epsilon$, the value of $r$ scales as $r = O\big(t^{1+1/2k}/\epsilon^{1/2k}\big)$.
The complexity is $O\big(t^{1+1/2k}/\epsilon^{1/2k}\big)$ segments.

Slide5

Higher-order simulation
Berry, Ahokas, Cleve, Sanders, 2007

The complexity is $O\big(t^{1+1/2k}/\epsilon^{1/2k}\big)$ segments.
For Suzuki product formulae, we have an additional factor in the number of exponentials per segment, which grows as $5^{k-1}$.
The complexity then needs to be multiplied by a further factor that is exponential in $k$.
The overall complexity scales as $O\big(5^{2k}\, t^{1+1/2k}/\epsilon^{1/2k}\big)$ exponentials.
We can also take an optimal value of $k$, which gives scaling close to linear in $t$ and subpolynomial in $1/\epsilon$.
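As a sketch of how the higher-order formulae are constructed, the following implements the standard Suzuki recursion (each order-$2k$ formula built from five copies of the order-$(2k-2)$ formula) and checks numerically that the error improves with $k$. It assumes NumPy/SciPy; the matrices and names are illustrative, not from the slides.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
dim = 4

def random_hermitian(dim):
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (a + a.conj().T) / 2

terms = [random_hermitian(dim) for _ in range(3)]   # the H_j
H = sum(terms)

def s2(lam):
    """Second-order symmetric (Strang) product formula S_2(lambda)."""
    u = np.eye(dim, dtype=complex)
    for Hj in terms:
        u = u @ expm(-1j * Hj * lam / 2)
    for Hj in reversed(terms):
        u = u @ expm(-1j * Hj * lam / 2)
    return u

def suzuki(k, lam):
    """Suzuki recursion: the order-2k formula from five order-(2k-2) formulae."""
    if k == 1:
        return s2(lam)
    p = 1.0 / (4 - 4 ** (1 / (2 * k - 1)))
    outer = suzuki(k - 1, p * lam)
    middle = suzuki(k - 1, (1 - 4 * p) * lam)
    return outer @ outer @ middle @ outer @ outer

t, r = 1.0, 10
exact = expm(-1j * H * t)
for k in (1, 2, 3):
    approx = np.linalg.matrix_power(suzuki(k, t / r), r)
    print(f"order {2*k}: error = {np.linalg.norm(approx - exact, 2):.2e}")
```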

Slide6

Solving linear systems
Harrow, Hassidim & Lloyd, 2009

Consider a large system of linear equations: $A\vec{x} = \vec{b}$.
First assume that the matrix $A$ is Hermitian.
It is possible to simulate Hamiltonian evolution under $A$ for time $t$: $e^{-iAt}$.
Encode the initial state in the form $|b\rangle \propto \sum_i b_i |i\rangle$.
The state can also be written in terms of the eigenvectors $|\lambda_j\rangle$ of $A$ as $|b\rangle = \sum_j \beta_j |\lambda_j\rangle$.
We can obtain the solution $|x\rangle \propto A^{-1}|b\rangle$ if we can divide each $\beta_j$ by $\lambda_j$.
Use the phase estimation technique to place the estimate of $\lambda_j$ in an ancillary register to obtain $\sum_j \beta_j |\lambda_j\rangle|\tilde\lambda_j\rangle$.

Slide7

Solving linear systems
Harrow, Hassidim & Lloyd, 2009

Use the phase estimation technique to place the estimate of $\lambda_j$ in an ancillary register to obtain $\sum_j \beta_j |\lambda_j\rangle|\tilde\lambda_j\rangle$.
Append an ancilla and rotate it according to the value of $\tilde\lambda_j$ to obtain $\sum_j \beta_j |\lambda_j\rangle|\tilde\lambda_j\rangle\Big(\tfrac{C}{\tilde\lambda_j}|1\rangle + \sqrt{1 - \tfrac{C^2}{\tilde\lambda_j^2}}\,|0\rangle\Big)$.
Invert the phase estimation technique to remove the estimate of $\lambda_j$ from the ancillary register, giving $\sum_j \beta_j |\lambda_j\rangle\Big(\tfrac{C}{\lambda_j}|1\rangle + \sqrt{1 - \tfrac{C^2}{\lambda_j^2}}\,|0\rangle\Big)$.
Use amplitude amplification to amplify the $|1\rangle$ component on the ancilla, giving a state proportional to $\sum_j \tfrac{\beta_j}{\lambda_j}|\lambda_j\rangle \propto A^{-1}|b\rangle$.

Slide8

Solving linear systems
Harrow, Hassidim & Lloyd, 2009

What about non-Hermitian $A$?
Construct a blockwise matrix $\mathcal{C} = \begin{pmatrix} 0 & A \\ A^\dagger & 0 \end{pmatrix}$, which is Hermitian.
The inverse of $\mathcal{C}$ is then $\mathcal{C}^{-1} = \begin{pmatrix} 0 & (A^\dagger)^{-1} \\ A^{-1} & 0 \end{pmatrix}$.
This means that $\mathcal{C}^{-1}\begin{pmatrix}\vec{b}\\ 0\end{pmatrix} = \begin{pmatrix}0\\ A^{-1}\vec{b}\end{pmatrix}$.
In terms of the state, we prepare $|b\rangle$ in the first block, and the solution $|x\rangle$ is produced in the second block.
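A purely classical sketch of the eigenvalue-inversion step at the heart of the algorithm (not a quantum implementation): expand $|b\rangle$ in the eigenbasis of $A$, divide each coefficient by its eigenvalue, and check against a direct solve. Assumes NumPy; the matrix and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8

# Random Hermitian A and right-hand side b (normalised like a quantum state).
M = rng.normal(size=(n, n))
A = (M + M.T) / 2
b = rng.normal(size=n)
b = b / np.linalg.norm(b)

# Eigendecomposition A = sum_j lambda_j |lambda_j><lambda_j|.
lam, V = np.linalg.eigh(A)
beta = V.T @ b                      # beta_j = <lambda_j | b>

# HHL-style step: divide each beta_j by lambda_j, then renormalise.
x_unnormalised = V @ (beta / lam)
x_state = x_unnormalised / np.linalg.norm(x_unnormalised)

# Compare with the directly solved, normalised solution of A x = b.
x_direct = np.linalg.solve(A, b)
x_direct = x_direct / np.linalg.norm(x_direct)
print("overlap |<x_HHL | x_direct>| =", abs(x_state @ x_direct))   # ~1
```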

Slide9

Solving linear systems
Harrow, Hassidim & Lloyd, 2009

Complexity analysis. We need to examine:
- the complexity of simulating the Hamiltonian to estimate the phase;
- the accuracy needed for the phase estimate;
- the possibility of $1/\lambda_j$ being large (the eigenvalues can be as small as $1/\kappa$, where $\kappa$ is the condition number).
The complexity of simulating the Hamiltonian for time $t$ is approximately proportional to $t$.
To obtain accuracy $\delta$ in the estimate of $\lambda_j$, the Hamiltonian needs to be simulated for time $O(1/\delta)$.
We actually need to multiply the state coefficients by $C/\lambda_j$, with $C = O(1/\kappa)$, so that the rotation is valid for all eigenvalues.
To obtain accuracy $\epsilon$ in $1/\lambda_j$, we need accuracy $O(\epsilon\lambda_j^2)$ in the estimate of $\lambda_j$.
Final complexity is roughly $O(\kappa^2/\epsilon)$, up to logarithmic factors.

Slide10

Differential equations
Berry, 2010

Discretise the differential equation, then encode it as a linear system.
Simplest discretisation: the Euler method; for $\frac{d\vec{x}}{dt} = A\vec{x} + \vec{b}$ this gives $\vec{x}_{i+1} = \vec{x}_i + \Delta t\,(A\vec{x}_i + \vec{b})$.
The first block row of the linear system sets the initial condition.
The final block rows set $\vec{x}$ to be constant after the last time step.
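A sketch of how an Euler discretisation can be packed into one large linear system, in the spirit described above. This toy version is classical, and it omits the extra rows that hold $\vec{x}$ constant after the final step; NumPy is assumed and the block layout and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, steps, dt = 2, 50, 0.02

A = rng.normal(size=(n, n)) * 0.5
b = rng.normal(size=n)
x0 = rng.normal(size=n)

# Block linear system encoding the Euler steps x_{i+1} = x_i + dt*(A x_i + b):
#   block row 0:              x_0                  = x0     (initial condition)
#   block row i (1..steps):   x_i - (I + dt*A) x_{i-1} = dt*b  (one Euler step)
dim = n * (steps + 1)
L = np.zeros((dim, dim))
rhs = np.zeros(dim)
I = np.eye(n)
L[:n, :n] = I
rhs[:n] = x0
for i in range(1, steps + 1):
    L[i*n:(i+1)*n, i*n:(i+1)*n] = I
    L[i*n:(i+1)*n, (i-1)*n:i*n] = -(I + dt * A)
    rhs[i*n:(i+1)*n] = dt * b

history = np.linalg.solve(L, rhs).reshape(steps + 1, n)

# Check against stepping the Euler recursion directly.
x = x0.copy()
for _ in range(steps):
    x = x + dt * (A @ x + b)
print("final-step mismatch:", np.linalg.norm(history[-1] - x))   # ~0
```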

Slide11

Quantum walks

A classical walk has a position which is an integer, $x$, which jumps either to the left or the right at each step.
The resulting distribution is a binomial distribution, or a normal distribution in the limit, with spread growing as the square root of the number of steps.
The quantum walk has position and coin values, $|x, c\rangle$.
It then alternates coin and step operators, e.g. a Hadamard coin followed by a coin-controlled shift of the position.
The position can progress linearly in the number of steps.
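A sketch of the coined (discrete-time) walk on a line, assuming NumPy, showing the linear spreading compared with the square-root spreading of the classical walk. The Hadamard coin and the symmetric initial coin state are a standard illustrative choice, not taken from the slides.

```python
import numpy as np

steps, size = 50, 201           # positions -100 .. 100
mid = size // 2

# State psi[x, c]: amplitude at position x with coin value c in {0, 1}.
psi = np.zeros((size, 2), dtype=complex)
psi[mid, 0] = 1 / np.sqrt(2)
psi[mid, 1] = 1j / np.sqrt(2)   # symmetric initial coin state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

for _ in range(steps):
    psi = psi @ H.T                       # coin operator on the coin register
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]          # coin value 0 steps right
    shifted[:-1, 1] = psi[1:, 1]          # coin value 1 steps left
    psi = shifted

prob = (abs(psi) ** 2).sum(axis=1)
x = np.arange(size) - mid
sigma_quantum = np.sqrt((prob * x**2).sum())
print(f"quantum walk spread after {steps} steps:  {sigma_quantum:.1f}")
print(f"classical walk spread (sqrt of steps):    {np.sqrt(steps):.1f}")
```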

 Slide12

Quantum walk on a graph

The walk position is any node on the graph.
Describe the generator matrix by $K_{jk} = \gamma$ for $j \ne k$ when $(j,k)$ is an edge, $K_{jj} = -d_j\gamma$, and $K_{jk} = 0$ otherwise.
The quantity $d_j$ is the number of edges incident on vertex $j$.
An edge between $j$ and $k$ is denoted $(j,k)$.
The probability distribution for a continuous walk has the differential equation $\frac{d p_j(t)}{dt} = \sum_k K_{jk}\, p_k(t)$.

Slide13

Quantum walk on a graph
Farhi, 1998

Quantum mechanically we have amplitudes $a_j(t)$ obeying $i\,\frac{d a_j(t)}{dt} = \sum_k H_{jk}\, a_k(t)$.
The natural quantum analogue is to use the generator matrix as the Hamiltonian.
We take $H_{jk} = K_{jk}$.
Probability is conserved because $H$ is Hermitian.
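A sketch contrasting the classical generator-matrix evolution with its quantum analogue on a small cycle graph, assuming NumPy/SciPy; the graph and the rate $\gamma$ are illustrative. Both evolutions conserve total probability, the classical one because the columns of $K$ sum to zero and the quantum one because $H$ is Hermitian.

```python
import numpy as np
from scipy.linalg import expm

# Cycle graph on 6 nodes; gamma is the per-edge hopping rate.
n, gamma = 6, 1.0
Adj = np.zeros((n, n))
for j in range(n):
    Adj[j, (j + 1) % n] = Adj[(j + 1) % n, j] = 1

deg = Adj.sum(axis=1)
K = gamma * Adj - gamma * np.diag(deg)     # generator matrix of the classical walk

t = 2.0
p0 = np.zeros(n); p0[0] = 1.0              # start at node 0
p_t = expm(K * t) @ p0                     # classical: dp/dt = K p
print("classical distribution:", np.round(p_t, 3), " sum =", p_t.sum())

# Quantum analogue: the same matrix used as a Hamiltonian, i da/dt = H a.
H = K
a0 = np.zeros(n, dtype=complex); a0[0] = 1.0
a_t = expm(-1j * H * t) @ a0
prob = abs(a_t) ** 2
print("quantum distribution:  ", np.round(prob, 3), " sum =", prob.sum())
```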

Slide14

Quantum walk on a graph
Childs, Farhi, Gutmann, 2002

The goal is to traverse the graph from the entrance to the exit.
Classically the random walk will take exponential time.
For the quantum walk, define a superposition state over each column of the graph, $|\mathrm{col}\ j\rangle = \frac{1}{\sqrt{N_j}} \sum_{a \in \mathrm{column}\ j} |a\rangle$.
On these states the matrix elements of the Hamiltonian are those of a walk on a line (hopping strength $\sqrt{2}\gamma$ between adjacent columns).

[Figure: two binary trees glued together at their leaves, with the "entrance" vertex at one root and the "exit" vertex at the other.]

Slide15

Quantum walk on a graph
Childs, Cleve, Deotto, Farhi, Gutmann, Spielman, 2003

Add random connections between the two trees.
All vertices (except the entrance and exit) have degree 3.
Again using column states, the matrix elements of the Hamiltonian are those of a walk on a line.
This is a line with a defect where the two trees are joined.
There are reflections off the defect, but the quantum walk still reaches the exit efficiently.

[Figure: the two trees joined by a random cycle, with "entrance" and "exit" vertices at the roots.]

Slide16

NAND tree quantum walk
Farhi, Goldstone, Gutmann, 2007

In a game tree I alternate making moves with an opponent.
In this example, if I move first then I can always direct the ant to the sugar cube.
What is the complexity of doing this in general? Do we need to query all the leaves?

[Figure: a game tree of alternating AND and OR gates, illustrated with an ant being directed towards a sugar cube.]
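For concreteness, a classical evaluator for a balanced NAND tree (illustrative only; it reads every leaf, which is exactly the cost the quantum algorithm aims to beat). It also checks the standard identity that two levels of NAND gates reproduce an OR of ANDs.

```python
from itertools import product

def nand_tree(leaves):
    """Evaluate a balanced NAND tree over the given leaves (length a power of two)."""
    values = list(leaves)
    while len(values) > 1:
        values = [1 - (a & b) for a, b in zip(values[::2], values[1::2])]
    return values[0]

# Depth-2 example: NAND(NAND(x1, x2), NAND(x3, x4)) equals (x1 AND x2) OR (x3 AND x4).
for bits in product([0, 1], repeat=4):
    assert nand_tree(bits) == ((bits[0] & bits[1]) | (bits[2] & bits[3]))
print("depth-2 NAND tree matches OR-of-ANDs on all inputs")
```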

Slide17

NAND tree quantum walk
Farhi, Goldstone, Gutmann, 2007

[Figure: the alternating AND/OR game tree rewritten as a tree of NAND gates, with NOT gates appearing at the leaves.]

Slide18

NAND tree quantum walk
Farhi, Goldstone, Gutmann, 2007

The Hamiltonian is a sum of an oracle Hamiltonian, representing the connections, and a fixed driving Hamiltonian, which is the remainder of the tree.
Prepare a travelling wave packet on the left.
Depending on the answer to the NAND tree problem, after a fixed time the wave packet will either be found on the right or be reflected.
The reflection depends on the solution of the NAND tree problem.

Slide19

Simulating quantum walks

A more realistic scenario is that we have an oracle that provides the structure of the graph; i.e., a query to a node returns all the nodes that are connected to it.
The quantum oracle is queried with a node number $x$ and a neighbour number $j$.
It returns a result via the quantum operation $|x, j\rangle|y\rangle \mapsto |x, j\rangle|y \oplus \nu_j(x)\rangle$.
Here $\nu_j(x)$ is the $j$'th neighbour of $x$.

[Figure: a query node on the graph together with the connected nodes returned by the oracle.]

Slide20

Decomposing the Hamiltonian
Aharonov, Ta-Shma, 2003

In the matrix picture, we have a sparse matrix.
The rows and columns correspond to node numbers.
The ones indicate connections between nodes.
The oracle gives us the position of the $j$'th nonzero element in column $x$.
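A toy illustration of this oracle convention, assuming Python: the classical function below mimics the reversible action $|x, j\rangle|y\rangle \mapsto |x, j\rangle|y \oplus \nu_j(x)\rangle$ on basis states. The graph and names are made up for the example.

```python
# Toy graph given as adjacency lists; nu(x, j) is the j'th neighbour of node x.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}

def nu(x, j):
    return adj[x][j]

def oracle(x, j, y):
    """Action of the oracle on a basis state: (x, j, y) -> (x, j, y XOR nu_j(x))."""
    return x, j, y ^ nu(x, j)

# The XOR convention makes the oracle its own inverse, hence reversible.
state = (1, 2, 0)
once = oracle(*state)
twice = oracle(*once)
print(once, twice)   # (1, 2, 3), then back to (1, 2, 0)
```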

Slide21

Decomposing the Hamiltonian
Aharonov, Ta-Shma, 2003

In the matrix picture, we have a sparse matrix.
The rows and columns correspond to node numbers.
The ones indicate connections between nodes.
The oracle gives us the position of the $j$'th nonzero element in column $x$.
We want to be able to separate the Hamiltonian into 1-sparse parts.
This is equivalent to a graph colouring – the graph edges are coloured such that the edges at each node all have distinct colours.
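A small sketch of the equivalence between edge colouring and splitting into 1-sparse parts, assuming NumPy. It uses a simple greedy colouring computed with global knowledge of the graph, unlike the local, oracle-based colouring discussed on the following slides; the matrix is illustrative.

```python
import numpy as np

# Small symmetric 0/1 adjacency matrix standing in for a sparse Hamiltonian.
H = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])
n = len(H)

# Greedy proper edge colouring: give each edge the smallest colour not yet
# used at either endpoint.  The edges of one colour form a 1-sparse matrix.
colour_of = {}
used = [set() for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if H[i, j]:
            c = 0
            while c in used[i] or c in used[j]:
                c += 1
            colour_of[(i, j)] = c
            used[i].add(c); used[j].add(c)

num_colours = max(colour_of.values()) + 1
parts = [np.zeros_like(H) for _ in range(num_colours)]
for (i, j), c in colour_of.items():
    parts[c][i, j] = parts[c][j, i] = H[i, j]

assert (sum(parts) == H).all()
for c, part in enumerate(parts):
    # 1-sparse: at most one nonzero (0/1) entry in every row and column.
    assert part.sum(axis=0).max() <= 1 and part.sum(axis=1).max() <= 1
    print(f"colour {c}:\n{part}")
```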

Slide22

Graph colouring
Berry, Ahokas, Cleve, Sanders, 2007

How do we do this colouring?
First guess: for each node, assign edges sequentially according to their numbering.
This does not work, because the edge between nodes $x$ and $y$ may be edge $j$ (for example) of $x$, but edge $k$ of $y$.
Second guess: for the edge between $x$ and $y$, colour it according to the pair of numbers $(j, k)$, where it is edge $j$ of node $x$ and edge $k$ of node $y$.
We decide the order such that $x < y$.
It is still possible to have ambiguity: say we have a string of nodes whose edges all receive the same colour pair.

Slide23

Graph colouring
Berry, Ahokas, Cleve, Sanders, 2007

How do we do this colouring?
First guess: for each node, assign edges sequentially according to their numbering.
This does not work, because the edge between nodes $x$ and $y$ may be edge $j$ (for example) of $x$, but edge $k$ of $y$.
Second guess: for the edge between $x$ and $y$, colour it according to the pair of numbers $(j, k)$, where it is edge $j$ of node $x$ and edge $k$ of node $y$.
We decide the order such that $x < y$.
It is still possible to have ambiguity: say we have a string of nodes whose edges all receive the same colour pair.
Use a string of nodes with equal edge colours, and compress the node labels along the string to break the ambiguity.

[Figure: a string of nodes whose edges all carry the same colour pair.]

Slide24

General Hamiltonian oracles
Aharonov, Ta-Shma, 2003

More generally, we can perform a colouring on a graph with matrix elements of arbitrary (Hermitian) values.
Then we also require an oracle to give us the values of the matrix elements.

Slide25

Simulating 1-sparse case
Aharonov, Ta-Shma, 2003

Assume we have a 1-sparse matrix.
How can we simulate evolution under this Hamiltonian?
Two cases:
- If the element is on the diagonal, then we have a 1D subspace.
- If the element is off the diagonal, then we need a 2D subspace.

Slide26

Simulating 1-sparse case
Aharonov, Ta-Shma, 2003

We are given a column number $x$. There are then 5 quantities that we want to calculate:
- a bit registering whether the element is on or off the diagonal; i.e. whether $x$ belongs to a 1D or 2D subspace;
- the minimum number out of the (1D or 2D) subspace to which $x$ belongs, call it $m$;
- the maximum number out of the subspace to which $x$ belongs, call it $M$;
- the entries of $H$ in the subspace to which $x$ belongs;
- the evolution under $H$ for time $t$ in the subspace.
We have a unitary operation that maps $|x\rangle|0\rangle$ to $|x\rangle$ together with an ancillary register holding these five quantities.

Slide27

Simulating 1-sparse case
Aharonov, Ta-Shma, 2003

We have a unitary operation that maps $|x\rangle|0\rangle$ to $|x\rangle$ together with a register holding the subspace data, including (a classical description of) the evolution in that subspace.
We consider a superposition of the two states in the subspace, $\alpha|m\rangle + \beta|M\rangle$.
Then we obtain $(\alpha|m\rangle + \beta|M\rangle)|\text{data}\rangle$, since both states of the subspace yield the same data.
A second operation implements the controlled operation based on the stored approximation of the unitary evolution in the subspace.
This gives us $\big(e^{-iHt}(\alpha|m\rangle + \beta|M\rangle)\big)|\text{data}\rangle$.
Inverting the first operation then yields $e^{-iHt}(\alpha|m\rangle + \beta|M\rangle)$ with the data register returned to $|0\rangle$.
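A sketch of why the 1-sparse case is easy: a 1-sparse Hermitian matrix splits into 1D and 2D invariant subspaces, and the evolution can be assembled by exponentiating each small block separately. Assumes NumPy/SciPy; the matrix is illustrative, and the subspace bookkeeping here is done classically rather than in superposition.

```python
import numpy as np
from scipy.linalg import expm

# A 1-sparse Hermitian matrix: at most one nonzero entry per row/column.
H = np.array([[0.0, 0.7, 0.0, 0.0],
              [0.7, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.3, 0.0],
              [0.0, 0.0, 0.0, 0.0]])
n, t = len(H), 0.9

U = np.zeros((n, n), dtype=complex)
done = set()
for x in range(n):
    if x in done:
        continue
    nz = np.nonzero(H[x])[0]
    if len(nz) == 0 or nz[0] == x:
        # 1D subspace: a diagonal element (possibly zero) just gives a phase.
        U[x, x] = np.exp(-1j * H[x, x] * t)
        done.add(x)
    else:
        # 2D subspace spanned by |x> and its unique partner |y>.
        y = nz[0]
        idx = np.ix_([x, y], [x, y])
        U[idx] = expm(-1j * H[idx] * t)
        done.update({x, y})

print("max deviation from exact:", abs(U - expm(-1j * H * t)).max())   # ~1e-16
```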

Slide28

Applications

2007: Discrete query NAND algorithm – Childs, Cleve, Jordan, Yeung
2009: Solving linear systems – Harrow, Hassidim, Lloyd
2009: Implementing sparse unitaries – Jordan, Wocjan
2010: Solving linear differential equations – Berry
2013: Algorithm for scattering cross section – Clader, Jacobs, Sprouse

Slide29

Implementing unitaries
Jordan, Wocjan, 2009

Construct a Hamiltonian from the unitary $U$ as $H = |1\rangle\langle 0| \otimes U + |0\rangle\langle 1| \otimes U^\dagger$.
Now simulate evolution under this Hamiltonian.
Since $H^2 = I$, the evolution is $e^{-iHt} = \cos(t)\,I - i\sin(t)\,H$.
Simulating for time $t = \pi/2$ gives $e^{-iH\pi/2}|0\rangle|\psi\rangle = -i\,|1\rangle\,U|\psi\rangle$, i.e. $U$ applied up to a known phase.
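A numerical check of this block construction, assuming NumPy/SciPy: for $H$ built from a random unitary $U$, evolving $|0\rangle|\psi\rangle$ for time $\pi/2$ produces $-i\,|1\rangle U|\psi\rangle$. The names and the random test state are illustrative.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
d = 3

# Random unitary U from the QR decomposition of a random complex matrix.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
U = Q

# H = |1><0| (x) U + |0><1| (x) U^dagger, as a 2d x 2d block matrix
# with the |0> block first, so U sits in the lower-left block.
H = np.block([[np.zeros((d, d)), U.conj().T],
              [U, np.zeros((d, d))]])

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi = psi / np.linalg.norm(psi)
state = np.concatenate([psi, np.zeros(d)])      # |0>|psi> in this block ordering

out = expm(-1j * H * np.pi / 2) @ state
target = np.concatenate([np.zeros(d), -1j * (U @ psi)])   # -i |1> U|psi>
print("deviation:", np.linalg.norm(out - target))          # ~1e-15
```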

Slide30

Quantum simulation via walks

Three ingredients:
1. A Szegedy quantum walk
2. Coherent phase estimation
3. Controlled state preparation
The quantum walk has eigenvalues and eigenvectors related to those of the Hamiltonian.
By using phase estimation, we can estimate the eigenvalue, then implement the phase that is actually needed.

Slide31

Szegedy Quantum Walk
Szegedy, 2004

The walk uses two reflections.
The first is controlled by the first register and acts on the second register.
Given some matrix $P$, the operator is defined by a reflection about the states $|\phi_x\rangle \propto \sum_y \sqrt{P_{xy}}\,|y\rangle$ prepared in the second register, controlled on $|x\rangle$ in the first.

Slide32

Szegedy Quantum Walk
Szegedy, 2004

The diffusion operator for the second reflection is controlled by the second register and acts on the first.
Use a similar definition with a matrix $Q$.
Both are controlled reflections: each reflects about the states prepared from the corresponding matrix in one register, controlled on the basis state in the other register.
The eigenvalues and eigenvectors of the step of the quantum walk (the product of the two reflections) are related to those of a matrix formed from $P$ and $Q$.

Slide33

Szegedy walk for simulation
Berry, Childs, 2012

Use a symmetric system, with the state preparation determined by the matrix elements of the Hamiltonian.
Then the eigenvalues and eigenvectors of the walk are related to those of the Hamiltonian.
In reality we need to modify it to a "lazy" quantum walk, in which only part of the amplitude takes part in each step.
Grover preparation gives the required superposition over the nonzero elements in each column.

Slide34

Szegedy walk for simulation
Berry, Childs, 2012

Three step process:
1. Start with the state in one of the subsystems, and perform controlled state preparation.
2. Perform steps of the quantum walk to approximate the Hamiltonian evolution.
3. Invert the controlled state preparation, so the final state is in one of the subsystems.
Step 2 can just be performed with a small step size for the lazy quantum walk, or we can use phase estimation.
A Hamiltonian has eigenvalues $\lambda$, so evolution under the Hamiltonian has eigenvalues $e^{-i\lambda t}$.
The walk operator is the step of a quantum walk, and has eigenvalues $e^{\pm i\arcsin\lambda}$ (with $\lambda$ suitably rescaled).
The complexity is the maximum of the number of walk steps and the cost of the phase estimation.