

Presentation Transcript


Noisy Connections: A Survey of Interactive Coding and its Borders with Other Topics

Allison Bishop Lewko

Columbia University

featuring works by Schulman, Haeupler, Brakerski, Kalai, Jain, Rao, Vitercik, Dodis, Chung, Pass, Telang

Two-Party Computation with Communication Errors

Bob holds input x; Alice holds input y; they exchange messages over a channel that may introduce errors.

*The sender does not know an error occurred, so the rest of the computation is wrong!

We consider a strong adversary who can corrupt a constant fraction of all bits (fixed communication length).

What Makes Interactive Coding Distinct from Error-Correcting Codes?

Interactive coding problem for 2 parties:

As first formulated and studied by Schulman (1992)

For m rounds of interaction, just using error-correcting codes (encoding each round separately) can only achieve error rate < 1/m, since the adversary can spend its entire budget corrupting a single round.

Goal: constant relative error rate, and constant (multiplicative) overhead in communication.

Expressing the Protocol as a Tree

Bob speaks

(function of input x)

Alice speaks

(function of input y)

Execution of the Protocol with No Errors

Path in tree = transcript of communication
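To make the tree picture concrete, here is a minimal sketch (Python, with a hypothetical message function `next_bit`, and assuming for simplicity that the parties alternate turns) of how an error-free execution traces out one root-to-leaf path:

```python
# Sketch of a binary protocol tree (illustrative; parties assumed to alternate turns).
# Each internal node is owned by one party, whose input determines whether the
# execution moves to the left child (bit 0) or the right child (bit 1).
# The root-to-leaf path of bits is exactly the transcript of an error-free run.

def run_protocol(depth, x, y, next_bit):
    """next_bit(party, inp, transcript) -> 0 or 1 is a hypothetical message function."""
    transcript = []
    for level in range(depth):
        party = "Bob" if level % 2 == 0 else "Alice"   # Bob speaks first, as in the figure
        inp = x if party == "Bob" else y
        bit = next_bit(party, inp, tuple(transcript))  # go down-left (0) or down-right (1)
        transcript.append(bit)
    return transcript  # path in the tree = transcript of the communication

# Toy usage: each party just announces successive bits of its own input.
x, y = [1, 0, 1], [0, 1, 1]
toy = lambda party, inp, t: inp[len(t) // 2]
print(run_protocol(6, x, y, toy))   # [1, 0, 0, 1, 1, 1]
```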

Simulating the Protocol Tree Path Under Errors

Errors cause Bob and Alice to have differing views of the simulated transcript.

Approach:

Provide a mechanism to detect disagreement

Provide a mechanism to move back toward agreement

Once re-synched, try again to proceed down the protocol tree

Communicating “Pebble” Movements

Each party has a “pebble” it moves around the protocol tree

We can use a 4-symbol alphabet, “Down Left”, “Down Right”, “Back Up”, “Hold”, to describe pebbles that move along one edge of the tree at a time (or stay put).

The goal is to communicate the sequence of pebble moves so each party knows where the other party’s pebble is. We want to encode a dynamic string of characters L, R, B, H so that it is decoded properly at moments in time when there are not too many past errors.
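A minimal sketch of how the four move symbols act on a pebble position (positions represented as root-to-node bit paths; purely illustrative):

```python
# Sketch: applying pebble moves to a position in the protocol tree.
# A position is the path from the root, e.g. [0, 1] = "down-left, then down-right".
# Moves use the alphabet from the slide: L (down left), R (down right), B (back up), H (hold).

def apply_move(position, move):
    position = list(position)
    if move == "L":
        position.append(0)
    elif move == "R":
        position.append(1)
    elif move == "B" and position:   # backing up from the root is treated as a no-op
        position.pop()
    elif move not in "LRBH":
        raise ValueError(f"unknown move {move!r}")
    return position

def apply_moves(moves, start=()):
    pos = list(start)
    for m in moves:
        pos = apply_move(pos, m)
    return pos

# If a party decodes the other's move sequence correctly, it knows exactly
# where the other pebble sits:
assert apply_moves("LRBHL") == [0, 0]
```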

Encoding Movements via Tree Codes [Schulman 92]

Tree code:

Edges labeled by symbols from a constant-size alphabet

Any two paths have a constant fraction of symbols differing from the lowest common ancestor onwards

Example with alphabet {1,2,3,4,5}

[Figure: a small tree code over the alphabet {1,2,3,4,5}, with one symbol labeling each edge.]

Example: the strings 1, 2, 5 and 3, 2, 4 differ in 2 out of 3 symbols.
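As a sanity check on the distance condition, a small helper (plain Python, illustrative) that measures how two label strings diverge after their lowest common ancestor, i.e. after their longest common prefix:

```python
# Sketch: checking the tree-code distance condition for one pair of codewords.
# Two root-to-node paths correspond to two label strings; the condition asks that
# the strings differ in a constant fraction of positions *after* the point where
# the paths split (their lowest common ancestor = longest common prefix).

def divergence_fraction(a, b):
    assert len(a) == len(b)
    split = 0                                   # length of the common prefix
    while split < len(a) and a[split] == b[split]:
        split += 1
    suffix_len = len(a) - split
    if suffix_len == 0:
        return 1.0                              # identical paths: nothing to compare
    mismatches = sum(s != t for s, t in zip(a[split:], b[split:]))
    return mismatches / suffix_len

# The example from the slide: (1, 2, 5) vs (3, 2, 4) split at the root and
# differ in 2 of the 3 remaining symbols.
print(divergence_fraction((1, 2, 5), (3, 2, 4)))   # 0.666...
```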

Interactive Coding from Tree Codes

Suppose we have a 4-ary tree code:

Encode a sequence of moves “L, R, B, H, …” by the labels of the corresponding edges in the tree code: one symbol = one edge down the tree code.

Decode by finding the path in the tree code of the right length with closest Hamming distance to what was received.

One technicality: we don’t want final pebble moves to change the simulated transcript, so we can’t just hold when we reach the bottom of the protocol tree. We need to pad the protocol tree with dummy layers at the bottom (easy enough to do).
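A brute-force sketch of this decoding rule (exponential in the length, purely illustrative; `tree_code_label` is a hypothetical labeling function for the tree code):

```python
# Sketch: nearest-codeword decoding for a 4-ary tree code (brute force, illustrative only).
# The received word is a list of channel symbols; we search over all move sequences
# of the same length and return the one whose tree-code encoding is closest in
# Hamming distance to what was received.
from itertools import product

MOVES = "LRBH"   # down-left, down-right, back up, hold

def encode(moves, tree_code_label):
    """tree_code_label(prefix) -> label of the last edge on the path (hypothetical)."""
    return [tree_code_label(moves[: i + 1]) for i in range(len(moves))]

def hamming(a, b):
    return sum(s != t for s, t in zip(a, b))

def decode(received, tree_code_label):
    best, best_dist = None, None
    for cand in product(MOVES, repeat=len(received)):   # every path of the right length
        dist = hamming(encode(cand, tree_code_label), received)
        if best_dist is None or dist < best_dist:
            best, best_dist = cand, dist
    return "".join(best)
```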

Intuition for Why This Works

Define a good event as: both parties correctly decode and know where the other party’s pebble is.

When this happens, “progress” is made (either in moving forward, or in getting closer to syncing up).

A bad event is a decoding error. Only a bounded amount of damage is done, as pebbles only move one edge at a time.

[Figure: timeline argument. A decoding error at depth L forces an interval of length cL with a constant fraction of errors, so bad intervals can cover only a bounded fraction of the time.]

Now That You Think Tree Codes are Cool…

Some bad news:

We don’t know how to efficiently construct them.

Some progress on this: [B12, MS14]

but still no unconditional, poly-time deterministic construction.

Randomized constructions are known, but we still want efficient decoding too.

Efficient Solution: (tiny) TCs + Hashing [BK12]

Let’s revisit the higher-level approach:

Provide mechanism to detect disagreement

Provide mechanism to move back toward agreement

Once re-synched, try again to proceed down protocol tree

Observation: we can build short tree codes by brute force in poly time.

Naïve concatenation: use TC1 for a while, then use TC2, etc.

Problem: we lose the ability to detect/correct errors in the more distant past.

Solution: hash the entire simulated transcript to detect any lingering disagreement.
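A minimal sketch of that hash check (the keyed hash below is an illustrative stand-in, not the hash family used in [BK12]):

```python
# Sketch: hashing the entire simulated transcript so far (plus the chunk number)
# so that a disagreement anywhere in the past is detected with high probability.
# The keyed BLAKE2 call is only an illustrative stand-in for the protocol's hash family.
import hashlib

def transcript_hash(transcript_bits, chunk_number, key, out_bytes=4):
    data = bytes(transcript_bits) + chunk_number.to_bytes(4, "big")
    return hashlib.blake2b(data, key=key, digest_size=out_bytes).digest()

# If the two parties' simulated transcripts differ anywhere, their hashes differ
# except with small probability over the choice of key (treating the keyed hash
# as behaving like a random function).
```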

[BK12] Protocol Overview

[Figure: the simulation alternates Chunk / Hash Check / Chunk / Hash Check / …, with a short tree code used inside each chunk.]

Divide the original protocol into smallish chunks – use a short tree code within each.

Hash the entire simulated transcript so far + chunk number to detect disagreement.

Back up when disagreement is found.

Note: the hash length is long enough to avoid collisions whp, and the chunk length should be similar to avoid communication blowup from the hash phases.

Even Simpler Efficient Solution – no TCs! [H14]

Observation: Hash collisions aren’t really so bad!

If they happen at a constant rate, we can handle them just like errors.

We can make the chunks and hashes constant length; now we don’t even need short TCs to get constant error rate with constant communication overhead.

Independent hash keys are picked each time, so we can use a Chernoff bound to suitably control the overall effect of hash collisions on simulation progress.

Simplest Protocol Overview

[Figure: Chunk* / Hash Check / Chunk* / Hash Check / … (*chunks are simulated; hash checks are in the clear).]

Divide the original protocol into constant-size chunks.

Hash the entire simulated transcript so far + chunk number to detect disagreement.

Back up when disagreement is found.

Note: the chunk length should be similar to the hash length to avoid communication blowup from the hash phases. Hash collisions happen at a bounded constant frequency whp.
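Putting the pieces together, a heavily simplified, schematic sketch of the resulting simulation loop from one party's point of view (the helpers `fresh_key`, `run_chunk`, `exchange`, and `transcript_hash` are hypothetical; the real protocols handle re-synchronization more carefully):

```python
# Heavily simplified, schematic sketch of the hash-and-back-up loop (one party's view).
# Hypothetical helpers: fresh_key() returns this round's shared hash key,
# run_chunk() simulates the next constant-size chunk of the original protocol,
# exchange() swaps a short value with the other party over the noisy channel,
# and transcript_hash() is as sketched above.

CHUNK_BITS = 8   # constant chunk size (illustrative)
OVERHEAD = 5     # constant multiplicative communication overhead (illustrative)

def simulate(my_input, protocol_length, fresh_key, run_chunk, exchange, transcript_hash):
    transcript = []      # this party's view of the simulated transcript
    chunk_number = 0
    for _ in range(OVERHEAD * protocol_length // CHUNK_BITS):
        key = fresh_key()                                      # independent key each time
        my_hash = transcript_hash(transcript, chunk_number, key)
        if exchange(my_hash) != my_hash:
            # views disagree (or a channel error / hash collision hit the check):
            # back up by one chunk and try again
            del transcript[-CHUNK_BITS:]
            chunk_number = max(chunk_number - 1, 0)
        else:
            transcript += run_chunk(my_input, transcript, CHUNK_BITS)
            chunk_number += 1
    return transcript[:protocol_length]
```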

Applications/Extensions: 1. Interactive Coding Meets Cryptography

What happens when we apply interactive coding in situations

where we want to preserve more than just correctness and

(roughly) communication complexity?

Example: “Knowledge preserving” interactive coding [CPT13].

Question: Can we ensure that parties don’t learn anything more about the other’s input than they would learn in the error-free setting?

Answer: No! (at least not with a good error rate).

The main intuition is that errors will force a “back track”, so some unnecessary function of an input will be computed and sent.

IP = PSPACE over Adversarial Channels [DL]

It turns out:

Correctness and Soundness can be preserved over adversarial channel errors!

[Figure: the Verifier sends challenges and the Prover sends responses; a cheating prover says “What? Channel errors! Let’s go back.”]

A natural concern: can a cheating prover use the guise of channel errors to avoid answering tough challenges?

Main idea: the Verifier can pick fresh randomness after going back. Amplification is used to prevent poly-many tries from helping the prover too much.

Applications/Extensions: 2. Multi-party Protocols

Interactive coding for multi-party protocols [RS94, GMS11, JKL15]

Network of n parties that can communicate via pairwise channels.

Goal is to compute a joint function of the inputs over these channels.

Many models: synchronous vs. asynchronous, noisy vs. adversarial, etc.

Many measures: communication complexity, computation, rounds, links, etc.


[Figure: a network of n parties joined by pairwise communication links.]

Basic Idea: Reduce to the 2-party case.

Problems: the number of communication links, and the need to synchronize.

One Approach [JKL15]

Properties (ongoing work):

Efficiency preserving

Resilient to a constant fraction of adversarial error

Constant blowup in communication

Assumptions:

1. Protocol is semi-adaptive

2. There exists one party connected to all the rest

High-Level Description

All communication is through P*.

Make each pairwise protocol error resilient.

Problem 1: P* needs to synchronize the global protocol.

Problem 2: Errors need to be detected fast (after … communication).

Solution 2: After each (global) chunk of … bits, all parties speak.

Use a variant of [Schulman93] (inefficient)

Use a variant of [Brakerski-K12]? (efficient), yields … comm blowup

Use a variant of [Haeupler14] (efficient)

Passing the Burden of Being P* [LV]

Challenge:

P* maintains a view of each pairwise transcript to check consistency – can’t pass these all to a new P* without lots of communication overhead!

Idea:

Replace Hash(Transcript so far) with an iterated hash: letting c_1, …, c_k denote the chunks of the transcript so far, compute the running values h_1 = Hash(c_1) and h_i = Hash(h_{i-1}, c_i), and use h_k as the hash that checks agreement.

*Now we can pass short hashes instead of long transcripts to a new P*, maintaining the ability to detect prior disagreements.
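A minimal sketch of such an iterated hash over transcript chunks (plain hashlib as an illustrative stand-in for the protocol's hash family):

```python
# Sketch: iterated hash over transcript chunks.  The running value already commits
# to all earlier chunks, so a new P* only needs the latest short hash, not the full
# pairwise transcripts, to keep detecting disagreements.
import hashlib

def iterated_hash(chunks, digest_size=8):
    h = b""
    for chunk in chunks:                 # h_i = Hash(h_{i-1}, c_i)
        h = hashlib.blake2b(h + bytes(chunk), digest_size=digest_size).digest()
    return h

def extend(prev_hash, new_chunk, digest_size=8):
    # Updating with a new chunk needs only the previous short hash.
    return hashlib.blake2b(prev_hash + bytes(new_chunk), digest_size=digest_size).digest()

chunks = [b"0110", b"1001", b"1110"]
assert iterated_hash(chunks) == extend(iterated_hash(chunks[:2]), chunks[2])
```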


3. A More Speculative Connection

Recently, King and Saia [KS13] presented an expected poly-time Byzantine Agreement algorithm against a computationally unbounded, adaptive asynchronous adversary.

[LL13] presented an impossibility result for a kind of “mobile” adversary who can corrupt a changing set of players and reset their memories upon releasing them to corrupt others.

Intriguing Question: adversarial network channels can be defined to model each of these adversaries, so can we classify a “worst-case” adversarial network against which Byzantine Agreement is possible?

4. Connection Between Formulas and Communication [KW88]

[Figure: Alice and Bob exchange messages.]

How many bits need to be sent in the worst case?

Communication Complexity = Formula Depth

[KW88]

[Figure: a formula of AND and OR gates; the protocol's messages trace a root-to-leaf path through it (e.g. Right, Left, Left).]
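For background (this is the standard Karchmer-Wigderson game, not spelled out on the slide): Alice holds x with f(x) = 1, Bob holds y with f(y) = 0, and they must find a coordinate where x and y differ. A sketch of the protocol that walks down the formula, one bit per level:

```python
# Sketch of the Karchmer-Wigderson game on a formula (illustrative; not taken from the slides).
# A formula node is ("AND", left, right), ("OR", left, right), or a leaf ("VAR", i) /
# ("NOT", i) reading x_i or its negation.
# Invariant: the current subformula evaluates to 1 on Alice's x and to 0 on Bob's y,
# so when a leaf is reached the underlying variable must differ between x and y.

def evaluate(node, assignment):
    kind = node[0]
    if kind == "VAR":
        return assignment[node[1]]
    if kind == "NOT":
        return 1 - assignment[node[1]]
    left, right = evaluate(node[1], assignment), evaluate(node[2], assignment)
    return (left and right) if kind == "AND" else (left or right)

def kw_game(formula, x, y):
    """Requires formula(x) = 1 and formula(y) = 0; returns (differing index, bits sent)."""
    node, bits = formula, 0
    while node[0] in ("AND", "OR"):
        if node[0] == "OR":   # Alice announces a child that evaluates to 1 on x
            node = node[1] if evaluate(node[1], x) else node[2]
        else:                 # AND gate: Bob announces a child that evaluates to 0 on y
            node = node[1] if not evaluate(node[1], y) else node[2]
        bits += 1             # one bit per level, so bits sent <= formula depth
    return node[1], bits

# Tiny example: f = (x0 AND x1) OR x2.
f = ("OR", ("AND", ("VAR", 0), ("VAR", 1)), ("VAR", 2))
print(kw_game(f, x=[1, 1, 0], y=[1, 0, 0]))   # (1, 2): x and y differ at index 1
```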

Carrying Error-Resilience through the Karchmer-Wigderson Connection [KLR12]

We want: a compiler from error-free computation to error-resilient computation.

We know: the [KW88] correspondence between (error-free) computation and (error-free) communication.

We build: a compiler from error-free communication to error-resilient communication, and carry the resilience back through the [KW88] connection.

Communication with Errors: An Easier Model (Sender Feedback)

[Figure: Alice and Bob communicate; when a transmission is corrupted, the sender responds “Oops!”]

*The sender knows an error occurred.

Short-Circuit Errors

[Figure: a formula of AND and OR gates evaluated on 0/1 inputs, with one gate short-circuited.]

The true output of a gate is replaced by the value from one of its inputs.
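A small sketch of this error model (formula representation as in the earlier Karchmer-Wigderson sketch; which gate fails, and which input it copies, are chosen here just for illustration):

```python
# Sketch: evaluating a formula in which some gates are short-circuited, i.e. a
# faulty gate outputs one of its inputs instead of its true value.
# Gates are identified by their path from the root; faults maps a gate's path to
# which input it copies (0 = left, 1 = right).

def eval_with_shorts(node, x, faults, path=()):
    kind = node[0]
    if kind == "VAR":
        return x[node[1]]
    left = eval_with_shorts(node[1], x, faults, path + (0,))
    right = eval_with_shorts(node[2], x, faults, path + (1,))
    if path in faults:                      # short-circuited gate: copy one of its inputs
        return left if faults[path] == 0 else right
    return (left and right) if kind == "AND" else (left or right)

# Toy example: f = AND(OR(x0, x1), x2); short-circuit the root so it copies its left input.
f = ("AND", ("OR", ("VAR", 0), ("VAR", 1)), ("VAR", 2))
x = [1, 0, 0]
print(eval_with_shorts(f, x, faults={}))          # 0: the true value of f(x)
print(eval_with_shorts(f, x, faults={(): 0}))     # 1: the root outputs its left input instead
```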

Recovery from Non-Fatal Short-Circuits

Result: we can allow a fixed constant fraction of errors per path.

Example: allow one error per path.

Efficiency: formula depth multiplied by a constant.

Some Further Directions

What other kinds of circuit errors can we correct?

What kinds of bounds on the size of error-resilient circuits can we prove?

What other properties of 2- or multi-party computations can/can’t be preserved under channel errors?

What are the “right” network adversarial models for various applications?

How can we unify this with distributed computing theory, where correctness is relaxed and not a fixed function of the inputs?