Presentation Transcript

Slide1

Unified Hardware and Software for Environmental Monitoring Sensor Networks

Doug Carlson

PhD Defense

Advisor: Dr. Andreas Terzis

Readers: Dr. Alex Szalay (JHU), Dr. Omprakash Gnawali (University of Houston)

Slide2

The Plan

Discuss the environmental monitoring application and common usage patterns.

Talk about the challenges of this application.

Present hardware and software approaches to tackle these challenges.

Slide3

Contributions

Hardware suite tailored to environmental monitoring.

Improve multi-transmitter flooding energy usage and throughput.

Build efficient multi-tiered collection protocol with multi-transmitter flooding.

Slide4

Background

Background. Hardware. Networking.

Slide5

System Requirements

Sample periodically (~O(minutes))

Collect data wirelessly

Survive on batteries for months or years

High yield requirements (>99%)

Loose latency requirements (hours or days)

[Figure: deployment map, roughly 150 m across]

Slide6

WSN “Motes”

The dream: Lots of cheap devices replace a few expensive ones.

“Mote”: Microcontroller, radio, storage, analog inputs.

Power consumption: microwatts to 10’s of milliwatts

Radio range: 10’s to 100’s of meters

Design Goal: Conserve Energy

Approach: Minimize Radio Usage

Slide7

Hardware Challenges

Mix of long and short links

Mix of dense and sparse sensor placement needs

“Dumb” sensors require manual tracking

Modular, flexible, and “smart” hardware

Slide8

Networking Challenges

Routing with poor link state information and unreliable nodes

Remaining efficient in a large, patchy network.

Reliable, low-state data collection with concurrent radio transmissions.

Separate short-range traffic from long-range traffic.

Slide9

Hardware

Background. Hardware. Networking.

Slide10

WSN Hardware: Previous Approach

TelosB mote + external antenna

One external analog switch, four inputs (one ADC on mote)

[Diagram: Telos mote (MCU, radio, flash, USB) reads analog voltages through a fan-out board and multiplexer switch connecting four analog sensors in the sensor assembly.]

Slide11

The Breakfast Suite

Bacon mote: Optional amplifier, lower chip count

Toast Multiplexer: stores sensor information, daisy-chainable

[Diagram: Bacon mote (MCU + radio, flash, amplifier) connected over a digital I2C bus to a daisy chain of Toast boards, each with its own MCU and its own set of analog sensors.]

Slide12

Challenge 1: Variable Communication Range Requirements

Dense clusters of expensive sensors in qualitatively different regions (actually clusters of 4–32 analog sensors)

Node placement should be driven by science needs, not connectivity

We want to keep costs low and systems simple: avoid mixing transceivers

Slide13

The Bacon Mote: modular RF front-end

Bacon Leaf

Lower power than Telos

Higher max TX power

Cheaper

Bacon Router

Even higher max TX power

Lower RX threshold

Free-space range of 700 m – 24 km

Slide14

Challenge 2: Sensor Allocation

Replace 20 Telos + 2 relays at USDA with 2 Leaf, 1 Router, 10 Toast

10 Telos -> 1 Bacon, 5 Toast

10 Telos -> 1 Bacon, 5 Toast

Remove unnecessary relay

Slide15

Challenge 3: Sensor metadata and provenance

How do you interpret raw voltage measurements?

How do you keep track of sensor calibrations?

Once your hardware leaves the lab, how do you keep track of it?

If you don’t get the metadata right, you don’t get scientifically usable data.

Slide16

Solution: Store types/IDs on Toast MCU

Record identifiers, types, and calibration in Toast and Bacon flash.

Collect this information with sensor samples.

Barcode everything.

Automatic sync between the physical and digital worlds

Slide17

Breakfast Suite Summary

Parameter            | TelosB  | Bacon-Leaf | Bacon-Router | Toast
---------------------|---------|------------|--------------|---------
Est. Unit Cost       | $31     | $22        | $34          | $21
Max Free-Space Range | 0.2 km  | 0.7 km     | 24 km / 3 km | NA
Active Power         | 6.4 mW  | 5.4 mW     | 5.4 mW       | 5.0 mW
Shutdown Power       | 15.3 uW | 7.8 uW     | 8.1 uW       | 2.1 uW*
Std. TX Power        | 60 mW   | 51.3 mW    | 396 mW       | NA
RX Power             | 60 mW   | 50.1 mW    | 62.4 mW      | NA

* Toast bus can be completely powered off between samples

Slide18

Networking

Background. Hardware. Networking.

Slide19

Single-Path Routing

Single destination

Form and maintain routing tree.

One bad link breaks tree.

One bad node breaks tree.

Uncoordinated senders -> interference

Slide20

How can we make routing resilient to individual node and link failures?

Incorporate node “health” in routing?

Adds complexity, hard to include all factors

Collect more or better link quality data?

Need traffic to measure: this isn’t free.

Send data over a variety of routes?

Coordinating senders to avoid collisions is nontrivial.

Ideally: no single point of failure, minimal coordination between forwarders.

Slide21

Non-destructive Concurrent Transmissions

[Figure: two transmitters send the same bits at the same time; the concurrent transmissions are non-destructive at the receivers.]

Same data. Same time.

Slide22

Solution 1: Try ALL paths at the same time

[Figure: a flood from Source to Destination travels over every available path simultaneously.]

Related: Ferrari et al., 2012, 2011; Lu and Whitehouse, 2009

Slide23

Competitive with single path routing!

Lack of medium access = much faster than “normal” flooding.

Highly reliable: end-to-end packet reception > 99%

But…

“Unhelpful” nodes are wasting energy

Packet spacing is very conservative

Related: Ferrari et al., 2012, 2011; Lu and Whitehouse, 2009

Slide24

Solution 2: Only use “useful” nodes

Slide25

Identify Useful Forwarders with CXFS

w is on some shortest path from S to D if d(S,w) + d(w,D) = d(S,D)

Burst Setup: S floods. D measures d(S,D); each w measures d(S,w).

Setup Acknowledgement: D floods. D tells d(S,D) to w and S; w measures d(D,w) and assumes d(w,D) = d(D,w).

Data Forwarding: S floods. w forwards if d(S,w) + d(w,D) <= d(S,D); otherwise, w sleeps.

Forwarder Selection in Multi-transmitter Networks. Carlson et al., IEEE DCOSS 2012
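A minimal sketch of the forwarder predicate above (the function name and the optional slack parameter are illustrative, not the thesis implementation):

```python
# Hypothetical sketch of the CXFS forwarder predicate. Distances are hop
# counts measured during Burst Setup / Setup Acknowledgement; "slack"
# (extra hops allowed beyond the shortest path) is an illustrative knob.

def is_useful_forwarder(d_s_w: int, d_d_w: int, d_s_d: int, slack: int = 0) -> bool:
    """w forwards if it lies on a path from S to D no longer than the
    shortest path (d(w,D) is taken to equal the measured d(D,w))."""
    return d_s_w + d_d_w <= d_s_d + slack

# A node 2 hops from S and 3 hops from D forwards for a 5-hop flood;
# a node 3 hops from each sleeps instead.
assert is_useful_forwarder(2, 3, 5)
assert not is_useful_forwarder(3, 3, 5)
```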

Slide26

Why is this right for Environmental Monitoring?

Link measurements are trivial

Improves resilience to failures over single-path routing: paths remain even when some nodes fail

Why is this better than flooding?

Improves radio duty cycle: unused nodes can sleep

Improves data throughput: space packets based on destination’s distance, not network diameter

Slide27

Testbed at a Glance

[Figure: forwarder usage maps for a close source and a distant source; darker = used more frequently.]

Nodes close to the sink have less impact on the rest of the network.

Nodes close to the sink should see better throughput.

Slide28

[Figure: TX/RX timelines across successive owner slots for S: forwarder (owner), D: forwarder (destination), A: forwarder, and B: non-forwarder, which stays asleep.]

Slide29

Improving Radio Duty Cycle Over Flooding

1 minute packet generation rate (much higher than EM needs)

Flood duty cycle ~3% (competitive with state of the art)

Forwarder selection decreases average duty cycle by 30%

Average end-to-end PRR > 99%

Slide30

Distance and Duty Cycle

[Figure: average duty cycle of forwarders (0–1.0) vs. distance from source to destination (average hops).]

Slide31

Distance and Throughput

Average throughput is 49% higher than simple flooding.

[Figure: throughput (normalized to flood) vs. distance from source to destination (average hops).]

Nodes at the edge of the network pay setup overhead for no spacing benefit.

Slide32

Improving Reliability over Single-Path Routing

24 hours of link traces

Compute shortest paths and forwarder sets (offline)

Flood data over each of these, with random node failures added (see the sketch below)
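To make the offline experiment concrete, here is a toy Monte Carlo under an assumed topology and an i.i.d. node-failure model (all names, the graph, and the failure rate are illustrative, not the thesis traces):

```python
# Toy Monte Carlo in the spirit of the offline evaluation above: fail
# intermediate nodes at random, then check whether (a) the single shortest
# path survives and (b) any route through the forwarder set survives.
import random
from collections import deque

def reachable(adj, src, dst, dead):
    """BFS from src to dst over nodes that have not failed."""
    seen, queue = {src}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return True
        for v in adj[u]:
            if v not in dead and v not in seen:
                seen.add(v)
                queue.append(v)
    return False

def compare(adj, shortest_path, src, dst, p_fail=0.15, trials=20_000):
    relay_ok = flood_ok = 0
    relays = {n for n in adj if n not in (src, dst)}
    for _ in range(trials):
        dead = {n for n in relays if random.random() < p_fail}
        relay_ok += all(n not in dead for n in shortest_path[1:-1])
        flood_ok += reachable(adj, src, dst, dead)
    return relay_ok / trials, flood_ok / trials

# A small diamond topology: two disjoint 2-hop routes from S to D.
adj = {"S": ["A", "B"], "A": ["S", "D"], "B": ["S", "D"], "D": ["A", "B"]}
print(compare(adj, ["S", "A", "D"], "S", "D"))  # forwarder set beats one path
```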

Slide33

Improve Reliability over Single-Path Routing

[Figure: delivery ratio of forwarder sets vs. single shortest paths under random node failures.]

Slide34

Coordinating large, patchy networks

Limit the impact of adding a new node on the rest of the network.

Reduce energy usage at most devices

OK to increase it at a few places

Slide35

Our solution: separate router tier from leaf tier

Leaves buffer data in flash.

Leaves push outstanding data to router when woken up.

Routers buffer leaf data in flash.

Routers push outstanding data to base station when woken up.

Separate channel for each patch.

Separate channel for routers.

Reduce forwarding load and overhead at Leaves

Slide36

Wakeup: Low Power Probing

[Figure: interleaved TX/RX probe timelines for nodes 0–3; each node periodically wakes, transmits a probe, listens briefly, and sleeps again.]

Reference: “Koala: Ultra-Low Power Data Retrieval in Wireless Sensor Networks,” Musaloiu-E. et al., 2008.
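A minimal sketch of the probing handshake, with a hypothetical in-memory Radio stub standing in for the real transceiver (message names and structure are assumptions, not the Koala implementation):

```python
# Minimal sketch of low power probing (receiver-initiated wakeup, per the
# Koala reference). The Radio stub is hypothetical; a real one drives the
# mote's transceiver.

class Radio:
    """Stub radio so the sketch runs: a shared in-memory channel."""
    def __init__(self):
        self.channel = []
    def send(self, msg):
        self.channel.append(msg)
    def recv(self):
        return self.channel.pop(0) if self.channel else None

class Sender:
    """A sender with pending data parks in RX and answers the next probe."""
    def __init__(self, payload):
        self.payload = payload
    def on_probe(self, radio):
        radio.send(self.payload)

def probe_once(radio, pending_senders):
    """One wakeup of a probing node: probe, listen briefly, collect data."""
    radio.send("PROBE")                       # "I'm awake: anyone have data?"
    for sender in pending_senders:            # parked senders hear the probe
        sender.on_probe(radio)
    got = []
    while (msg := radio.recv()) is not None:  # drain replies, then sleep
        if msg != "PROBE":
            got.append(msg)
    return got

print(probe_once(Radio(), [Sender("sample-1"), Sender("sample-2")]))
```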

Slide37

Segmented testbed

[Figure: testbed layout divided into segments.]

Slide38

Segmentation Duty Cycle Improvement

44% improvement

Average DC (all): 30% lower

Mean Leaf DC: 0.34% -> 0.18%

Mean Router DC: 0.36% -> 0.52%

[Figure: per-node duty cycle change for leaves and routers, ranging from roughly 50% worse to 50% better.]

Slide39

In Conclusion…

Developed a hardware suite tailored to environmental monitoring.

Improved energy usage and throughput in multi-transmitter flooding.

Built an efficient multi-tiered collection protocol with multi-transmitter flooding.

Better systems for domain scientists.

Slide40

Acknowledgements

NSF-MIRTHE

The Moore Foundation

0546648 CAREER: Towards turnkey sensor networks for the Sciences: Software tools for designing and managing networks of sensors

0754782 IDBR: An End-to-End Sensor Based System for Environmental Monitoring

Slide41

Acknowledgements

My faculty collaborators: Dr. Andreas Terzis, Dr. Alex Szalay, Dr. Omprakash Gnawali, Dr. Katalin Szlavecz

Current and past HINRG lab mates: Marcus, Yin, Jay, Razvan, Mike, Lim, John, Andong, Da, and Victor

The rest of the LUYF/EPS team: Scott, Mike, Lijun, Chih-han

Slide42

Acknowledgements

This stupid cat

Andreas’s frogs

Kelly Boyd: dual record holder for best human and most understanding girlfriend of all time

All of my families, friends, and teachers

Slide43

Questions/comments?

Slide44

Segmentation Duty Cycle Improvement: Overhead Only

Slide45

Challenge 2: Sensor Allocation

MCU has a fixed number of ADC inputs.

How can you effectively support both sparse and dense sensor needs?

[Figure: deployment map of 20 Telos motes, most within ~5 m groups, with relay and uplink nodes marked.]

Slide46

Spoiler Alert

Hardware to simplify structure and deployment of spatially heterogeneous (or “patchy”) networks

Use non-destructive concurrent radio transmissions to build reliable and efficient network protocols

Incorporate patchiness and normally-disconnected state into collection protocol

Slide47

Toast: a digital, chainable MUX board

Connects up to 8 analog sensors to each slave MCU.

I2C digital communication

Automatic device discovery

Slide48

What are we working with?

Limited monetary budget, favors mote-class devices

Microcontroller, radio, flash storage, sensors

Limited power budget (battery powered)

Idle power in microwatt range, active power in 10’s of milliwatts

Radio range of 10’s to 100’s of meters

Typically the largest contributor to power budget

Local data buffering in flash

Some fixed number of ADC inputs

Slide49

The Breakfast Suite

Matches deployment patterns

Simplifies manual tasks

Slide50

Eliminate Manual Tracking Tasks

[Diagram: lab-to-field workflow. In the lab: calibrate Toast, register sensors, record sensor associations. In the field: deploy; sensor associations travel with the data instead of being entered into the database by hand.]

Slide51

Routing with bad information: The Big Idea

Problem: A multi-hop route has many ways to fail, and they’re not easy to characterize.

Solution: Identify many potentially useful forwarders and send data over them simultaneously.

Saving power = longer and more informative deployments

Slide52

Larger Networks: More Data Forwarding

[Speaker note] AT: Since you don’t like this, I would skip it. It takes a lot of time to say what’s fairly obvious if it’s talking about the number of packets non-leaf nodes forward. Go straight to the next slide instead.

Slide53

How can we do the same with data packets?

Careful timing

We have access to a 26 MHz crystal

We can use a timer capture module to record the time that the packet preamble is sent or received

Modify packet contents deterministically (see the sketch below)

OK to increment hop-count, decrement TTL, compute checksums, and apply forward error correction.
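A hypothetical sketch of why deterministic rewrites preserve concurrency: if every forwarder applies the same field updates, the rebroadcast bytes stay bit-identical (field names and the CRC choice are illustrative, not the thesis packet format):

```python
# Hypothetical sketch: a per-hop packet rewrite that every concurrent
# forwarder applies identically, so their retransmissions remain
# bit-identical and therefore non-destructive.
from dataclasses import dataclass, replace
import zlib

@dataclass(frozen=True)
class Packet:
    src: int
    dst: int
    hop_count: int
    ttl: int
    payload: bytes

def forward_transform(pkt: Packet) -> Packet:
    """Deterministic rewrite: any two forwarders produce the same fields."""
    return replace(pkt, hop_count=pkt.hop_count + 1, ttl=pkt.ttl - 1)

def checksum(pkt: Packet) -> int:
    """The recomputed checksum is also deterministic given equal fields."""
    body = pkt.payload + bytes([pkt.src, pkt.dst, pkt.hop_count, pkt.ttl])
    return zlib.crc32(body)

p = Packet(src=1, dst=9, hop_count=0, ttl=8, payload=b"sample")
f1, f2 = forward_transform(p), forward_transform(p)
assert f1 == f2 and checksum(f1) == checksum(f2)  # bit-identical rebroadcast
```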

Slide54

Do we even need link quality information?

How can we get data from a leaf to a router without knowing local LQ or distances?

Slide55

Networking Problem 2: Disconnected Operation

What should leaf nodes do when their router disappears?

What should routers do when the base station disappears?

How can you re-associate nodes to the network?

Slide56

Cub Hill DC

Slide57

Slide58

Unstable nodes complicate route selection

[Figure 1]

Slide59

Impact of instability on route selection

Slide60

Improving link quality information improves duty cycle

[Figure 1]

Slide61

[Figure: negotiation exchange between a receiver (R) and three senders; successive rounds prune the contenders.]

Message legend:

DP: Any data pending?

H: Did anybody pick heads?

T: Did anybody pick tails?

RC: You are in the final selection.

Slide62

Convergence

Expected final contention is 2

Completion in log(N_sender) rounds
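A toy simulation of this convergence claim, using the stopping rule from the backup analysis slide (senders matching the receiver's coin stay in contention; when none match, the negotiation ends); the parameters are illustrative:

```python
# Toy simulation of the coin-flip contention reduction.
import random

def negotiate(n_senders: int, p_sel: float = 0.5) -> tuple[int, int]:
    """Return (final contention, rounds taken) for one negotiation."""
    active, rounds = n_senders, 0
    while True:
        rounds += 1
        matching = sum(random.random() < p_sel for _ in range(active))
        if matching == 0:
            return active, rounds   # no acks heard: stop with current set
        active = matching           # only matching senders continue

results = [negotiate(64) for _ in range(10_000)]
print("mean final contention:", sum(a for a, _ in results) / len(results))
print("mean rounds:          ", sum(r for _, r in results) / len(results))
# Expect final contention near 2 and roughly log2(64) ~ 6 rounds.
```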

Slide63

Contention Reduction and Completion Time

Median final contention is at or below 2

Roughly logarithmic completion time

Slide64

Successes and Failures

Poor link quality during working hours

Test conflates poor link quality with higher density

Good routing should help to mitigate this

In practice, would retry after protocol failure

Slide65

Loss Impact

[Figure: negotiation message exchange with lost messages marked.]

Slide66

Selection Bias

Per-node (bi-directional) PRR drawn from a Gaussian distribution with mean 0.5

Slide67

Time Overhead

16 ms per round + 1.6 ms setup

With full debugging/modular radio stack

Slide68

RAM/ROM Overhead

24 KB ROM

3 KB RAM

Full debug instrumentation (serial, GPIO, internal timing) + test application

Slide69

Link Asymmetry

Modest asymmetry possible

Favors better receiver-to-sender link

Slide70

Link Quality Variability

“Good” nodes extend the process

Longer negotiations lead to fewer nodes in contention

Slide71

Types of Failure

More weak links, more failures

Failures are not necessarily more prominent in the initial probe (highest contention) as contention increases

Slide72

p_sel: probability that a sender selects the same NC as the receiver.

The probability of a sender selecting the same NC as the receiver k times in a row is simply p_sel^k. The number of nodes remaining at the k-th round, given an initial number of senders n, then follows a simple binomial distribution:

R(n, k, p_sel) = B(n, p_sel^k)

The negotiation terminates with probability P_na(m) (no acknowledgements received from m actively-negotiating nodes):

P_na(m) = (1 - p_sel)^m

Combining these, we get P_s(k, n), the probability of stopping on the k-th round when starting with an initial density of n:

P_s(k, n) = sum_{j=0}^{n} r(j, n, k-1, p_sel) * P_na(j)

where r(j, n, k, p) is the PMF of R(n, k, p). The expected number of nodes remaining is:

E(n_na) = sum_{i=1}^{n} i * P_na(i)
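A short numeric check of these formulas (n and p_sel are illustrative choices, not measured values):

```python
# Evaluate the stopping-time formulas above using only their definitions:
# R(n,k,p) = Binomial(n, p**k), P_na(m) = (1-p)**m,
# P_s(k,n) = sum_j r(j, n, k-1, p) * P_na(j).
from math import comb

def r_pmf(j: int, n: int, k: int, p: float) -> float:
    """PMF of R(n, k, p) = Binomial(n, p**k), evaluated at j."""
    q = p ** k
    return comb(n, j) * q ** j * (1 - q) ** (n - j)

def p_na(m: int, p: float) -> float:
    """No acknowledgements heard from m actively-negotiating nodes."""
    return (1 - p) ** m

def p_stop(k: int, n: int, p: float) -> float:
    """P_s(k, n): probability of stopping on the k-th round."""
    return sum(r_pmf(j, n, k - 1, p) * p_na(j, p) for j in range(n + 1))

n, p_sel = 16, 0.5
dist = [(k, p_stop(k, n, p_sel)) for k in range(1, 30)]
print("E[stopping round] ~=", sum(k * pk for k, pk in dist))
# For p_sel = 0.5 the mass sits near log2(n) rounds, matching the
# "roughly logarithmic completion time" observation.
```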

Slide73

Hop-Count Symmetry

Network Diameter: ~7 hops (-6 dBm)

Most pairwise distances were equal

Worst case: 2 hops different.

Slide74

Synchronization Precision

1 transmitter, 2 forwarders (F1 and F2)

Connect forwarders to a logic analyzer

Measure difference between F1 and F2 SFD-sent pins

BER at 1 us: 1.18%

Worst-case synch error corresponds to a bit error rate below 2% without capture effect.

Slide75

Boundary Zone Effect: Duty Cycle Improvement

Slide76

Smoothing Method: Duty Cycle Improvement

Slide77

Boundary Zone Effect: Packet Reception Ratio

Slide78

Smoothing Method: Packet Reception Ratio

Slide79

Packet detection rate v. symbol rate + synch error

Slide80

Bit Error Rate v. capture effect @ 125K

Slide81

Single-TX comparison: distance

Slide82

Single-TX comparison: distance

Average: CX is 95% of SP length

Range: -50% to +60%

Slide83

Single-TX comparison: size

9.3x larger forwarder set

Slide84

Slave cycle

Slide85

Master Cycle

Slide86

Multitier Software Stack

Slide87

Validation: DC

Slide88

Slide89

Validation: throughput

Slide90

How can we get the most out of this?

What factors dictate the selection of a probe interval?

How do probe interval and wakeup frequency affect duty cycle? (See the sketch below.)

When should a node send probes? When shouldn’t it?

How would you adapt LPP to support multiple channels?
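One back-of-the-envelope model for the duty-cycle question above, under simple assumptions (fixed probe interval T, fixed awake time per probe, senders parked in RX until the next probe); the numbers are illustrative:

```python
# Hedged duty-cycle model for low power probing; not thesis measurements.

def probe_duty_cycle(T: float, t_active: float) -> float:
    """Radio-on fraction for a node that wakes to probe every T seconds."""
    return t_active / T

def mean_sender_wait(T: float) -> float:
    """Expected RX time a pending sender spends waiting for the next probe."""
    return T / 2

# Shorter intervals cut sender waiting but raise the probing node's duty
# cycle -- the trade-off behind choosing a probe interval.
for T in (0.5, 1.0, 5.0):
    print(f"T={T:4}s  probe DC={probe_duty_cycle(T, 0.01):.2%}"
          f"  mean sender wait={mean_sender_wait(T):.2f}s")
```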

Slide91

Slot length overhead

Slide92

Tunneled PRR

Average: 96.91% (dragged down by 1 bad router)

Slide93

Slide94

Bacon Node

Slide95

Toast Multiplexer

Slide96

Mini-Toast

Slide97

Mote Software

Slide98

Toast Discovery

Slide99

Slide100

Slide101

Slide102

Slide103

Slide104

Slide105

Slide106

Related work: ExOR (Biswas & Morris, 2005)

Attempts to opportunistically exploit long, lossy links

Broadcast, with the forwarder chosen from the pool of receivers

Challenge is to get the pool of receivers to agree on which is “closest” to the destination

Goal is to improve throughput

Slide107

RW: ZigBee

Router nodes may not duty cycle

Star network: single hop only

Tree network: static routing

Mesh network: AODV, source routing

Slide108

RW: Analysis of multipath routing, Part I: The effect on the packet delivery ratio (Tsirigos and Haas, 2004)

Motivated by link instability in wireless ad hoc networks

Split up/encode data with some redundancy

Send encoded chunks on different paths

Distributes the number of chunks on each path based on that path’s estimated PRR (a toy version of this rule is sketched below)
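A toy version of the proportional chunk-allocation rule described above (an illustrative approximation, not the paper's exact optimization):

```python
# Toy sketch: put more of the redundant chunks on paths with higher
# estimated PRR, using a largest-remainder proportional split.

def allocate_chunks(n_chunks: int, path_prrs: list[float]) -> list[int]:
    """Assign chunk counts to paths in proportion to estimated PRR."""
    total = sum(path_prrs)
    raw = [n_chunks * p / total for p in path_prrs]
    counts = [int(x) for x in raw]
    # Hand out the chunks lost to rounding, largest remainders first.
    for i in sorted(range(len(raw)), key=lambda i: raw[i] - counts[i],
                    reverse=True)[: n_chunks - sum(counts)]:
        counts[i] += 1
    return counts

# Example: 10 chunks over three paths with PRRs 0.9, 0.6, 0.3.
print(allocate_chunks(10, [0.9, 0.6, 0.3]))  # -> [5, 3, 2]
```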

Slide109

RW: Fully Wireless Implementation of Distributed Beamforming on a Software Defined Radio Platform (Rahman, Baidoo-Williams 2012)

Beamforming: attempt to phase-align transmissions to maximize constructive interference at the receiver

Accomplished using simple software techniques: senders randomly perturb the phase of their transmission and get feedback on whether it increased or decreased RSS at the receiver

Not applicable to flooding: we want to reach multiple physical locations, so we can’t use beamforming

Would be nice to have some support for this in future radio HW

Slide110

RW: The Capacity of Wireless Networks (Gupta & Kumar, 2000)

Under optimal conditions, throughput for a node is Theta(W/sqrt(n)), where W = per-link transmission rate and n = nodes in the network

At a high level: per-node throughput diminishes as the number of nodes in a network increases

Try to keep networks small!

Slide111

RW: Glossy (Ferrari, Zimmerling, Thiele et al.)

Efficient Network Flooding and Time Synchronization with Glossy (IPSN 2011)

The Bus Goes Wireless… (IQ2S 2012)

Also advocates a routing-free communication strategy

Approaches the lower bound on flood latency

At a 60-second IPI, achieves > 99% end-to-end PRR with duty cycle < 2% on an 85-node, 3-hop testbed

Our approach has the same number of transmissions for a flood and fewer for a scoped flood; outperforming this will come down to duty cycle tuning.

Slide112

RW: Hamming Codes

(7,4) code: rate = 0.5 when including 2-bit error detection

Can be implemented with relatively small lookup tables; higher coding rates require encode/decode routines
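To make the lookup-table point concrete, a sketch of a standard (7,4) Hamming encoder/decoder (one common bit layout; not taken from the thesis):

```python
# (7,4) Hamming code: the encoder reduces to a 16-entry lookup table, and
# the decoder corrects any single bit error via a 3-bit syndrome.

def encode(nibble: int) -> int:
    d = [(nibble >> i) & 1 for i in range(4)]    # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                      # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                      # parity over positions 2,3,6,7
    p4 = d[1] ^ d[2] ^ d[3]                      # parity over positions 4,5,6,7
    bits = [p1, p2, d[0], p4, d[1], d[2], d[3]]  # codeword positions 1..7
    return sum(b << i for i, b in enumerate(bits))

ENCODE_TABLE = [encode(n) for n in range(16)]    # the "small lookup table"

def decode(word: int) -> int:
    bits = [(word >> i) & 1 for i in range(7)]
    # Syndrome = 1-based position of the flipped bit, 0 if none.
    s = (bits[0] ^ bits[2] ^ bits[4] ^ bits[6]) \
      | (bits[1] ^ bits[2] ^ bits[5] ^ bits[6]) << 1 \
      | (bits[3] ^ bits[4] ^ bits[5] ^ bits[6]) << 2
    if s:
        bits[s - 1] ^= 1                         # correct the single error
    return bits[2] | bits[4] << 1 | bits[5] << 2 | bits[6] << 3

# Any single bit flip in any codeword is corrected:
for n in range(16):
    for i in range(7):
        assert decode(ENCODE_TABLE[n] ^ (1 << i)) == n
```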

(Source: Wikipedia)

Slide113

RW: Trickle (Levis et al., 2004)

“Polite gossip” for disseminating small units of data throughout a network

Nodes adaptively increase the delay between retransmissions of a packet once it has been “in the air” for a while

Exemplifies the difficulty of coordinating floods

Relatively long propagation delays (tens of seconds to minutes)

Slide114

RW: Flash Flooding (Lu, Whitehouse 2009)

Rely on capture effect to survive collisions

Attempts to control the number of concurrent transmissions and detect/restart failed floods

Edges of the network see poor completion times

Note naïve flood completion (X-MAC)