Augmenting Inertial Navigation with Image-Based Motion Estimation

Stergios I. Roumeliotis, Andrew E. Johnson and James F. Montgomery

S. I. Roumeliotis is with the California Institute of Technology, Pasadena, CA, stergios@robotics.caltech.edu. A. E. Johnson and J. F. Montgomery are with the Jet Propulsion Laboratory, Pasadena, CA, aej, mon@robotics.jpl.nasa.gov.

Abstract

Numerous upcoming NASA missions need to land safely and precisely on planetary bodies. Accurate and robust state estimation during the descent phase is necessary. Towards this end, we have developed a new approach for improved state estimation by augmenting traditional inertial navigation techniques with image-based motion estimation (IBME). A Kalman filter that processes rotational velocity and linear acceleration measurements provided by an IMU has been enhanced to accommodate relative pose measurements from the IBME. In addition to increased state estimation accuracy, IBME convergence time is reduced while robustness of the overall approach is improved. The methodology is described in detail and experimental results with a 5DOF gantry testbed are presented.

1 Introduction

NASA's roadmap for solar system exploration is filled with missions that require landing on planets, moons, comets and asteroids. Each mission has its own criteria for success, but all will require some level of autonomous safe and precise landing capability, possibly on hazardous terrain. Previous work [1] has focused on machine vision algorithms that, given a stream of images of a planetary body taken from a single nadir pointing camera, can produce estimates of spacecraft relative motion, spacecraft body absolute position and the 3-D topography of the imaged surface. These estimates can in turn be used by spacecraft control systems to follow precise trajectories to planetary surfaces and avoid hazardous terrain while landing.

Our current research, and the focus of this paper, augments inertial navigation techniques [2] with the earlier developed image-based motion estimation (IBME) algorithms [1]. The main contribution of this work is the development of an enhanced Kalman filter (KF) that is able to process relative pose measurements from the IBME as well as inertial measurements from an Inertial Measurement Unit (IMU). This general methodology can be extended to any type of relative measurements (differences between previous and current states of the system). In order to accommodate relative pose measurements, the KF has been modified to duplicate the exact same estimate of the pose of the system and to have these (differently evolving) estimates interact. The benefits of this approach include increased state estimation accuracy, reduced convergence time of the IBME algorithm, and improved robustness of the overall system. Accuracy increases due to the additional sensor data available to the KF, convergence time decreases because the KF output can be used to initialize the IBME algorithm, and robustness improves since the inertial sensors allow continued tracking of the spacecraft state even when the IBME fails to follow an adequate number of features. This approach can be used as an enabling technology for future NASA missions that require safe and precise landing capability.

1.1 Previous Work

The work of Qian et al. [3] is similar to ours in that they fuse both inertial data and imagery using a KF to produce motion estimates. However, we differ in key aspects. First, we combine both orientation (rate gyros) and position information (accelerometers) with image motion estimates; Qian et al. only fuse orientation information (absolute from magnetometers and rate from gyros) with image motion estimates. Second, our method is meant for an application, safe and precise landing for spacecraft, that is expected to exhibit large shifts in rotation and translation during the spacecraft descent phase. This will drastically affect the field of view after only a few frames, and thus we make no assumption about the reappearance of features in more than two subsequent images. Consequently, our method works by tracking features only between pairs of consecutive images, and not over a stream of images as in [3].

Bosse et al. [4] use an Extended Kalman filter to merge IMU, GPS, sonar altimeter, and camera motion estimates into a navigation solution for an autonomous helicopter. Their approach is applicable to a less diverse set of environments for two reasons. First, they use an optical flow based method for motion estimation while we use a correlation based method. Optical flow is restricted to domains where the motion between images is expected to be small, which is not the case for our application. Second, they make the assumption that the surface being imaged is planar, which may not be the case during landing on planetary surfaces. Our method can handle terrain with rugged topography.

Amidi et al. [5] have presented a visual odometer to estimate the position of a helicopter by visually locking on to and tracking ground objects. Attitude information is provided by a set of gyroscopes while position is estimated based on template matching from sequences of stereo vision data. In this approach attitude estimates are not derived from the vision algorithm and it is assumed that the field of view changes slowly while the helicopter hovers above the same area. New templates are acquired only when the previous ones disappear from the scene.

2 Approach

2.1 Inertial Sensing and Navigation

Inertial navigation systems (INS) have been used for decades for estimating the position and orientation of manned or autonomous vehicles such as aircraft [6] and spacecraft [7]. In recent years, similar systems were employed for localization of autonomous ground vehicles [9]. The core of most INS is an inertial measurement unit (IMU) comprised of 3-axial accelerometers and gyroscopes. Appropriate integration of their signals provides estimates of the location of the vehicle. The quality of these estimates depends primarily on the accuracy and noise profile of the IMU. Such systems can very accurately track sudden motions of short duration, but their estimates quickly deteriorate during longer traverses due to the noise that contaminates the IMU signals. More specifically, the integration of the low frequency noise component (bias) in the accelerometer and gyroscope measurements results in tracking errors that grow unbounded with time.
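
The quadratic growth of these bias-induced errors is easy to see in a toy dead-reckoning computation. The sketch below (not from the paper; the bias value and duration are arbitrary assumptions) double-integrates a constant accelerometer bias.

```python
import numpy as np

# Minimal dead-reckoning illustration (assumed values): a stationary vehicle,
# an accelerometer that should read zero but carries a small constant bias.
dt = 0.01            # integration step [s]
t = np.arange(0.0, 60.0, dt)
bias = 0.01          # assumed accelerometer bias [m/s^2]

accel = np.full_like(t, bias)   # measured acceleration = true (0) + bias
vel = np.cumsum(accel) * dt     # first integration  -> velocity error grows linearly
pos = np.cumsum(vel) * dt       # second integration -> position error grows quadratically

print(f"position error after {t[-1]:.0f}s: {pos[-1]:.2f} m")
# Roughly 0.5 * bias * t^2 = 0.5 * 0.01 * 60^2 = 18 m
```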

In order to sustain positioning accuracy, inertial navigation systems usually include additional sensors (such as compasses, inclinometers, star trackers, GPS, deep space network radio signal receivers, etc.) that provide periodic absolute attitude and position measurements. Over the years a variety of approaches have been proposed for optimally combining inertial with absolute measurements. Most current INS employ some form of Kalman filtering for this purpose.

Previously, spacecraft during the entry, descent and landing (EDL) phase of their trip had to rely solely on IMU signals in order to track their position. The lack of absolute position and/or attitude measurements resulted in large positioning errors and therefore left little room for navigation to a desired destination. In addition, tedious postprocessing of their data was necessary for determining their landing site.

In order to improve tracking accuracy during EDL, additional sensing modalities that provide positioning updates are necessary. Cameras are lightweight, space-proven sensors that have been successfully used in the past to provide motion estimates between consecutive image frames [1]. As mentioned above, most INS depend on the availability of absolute position and orientation information in order to reduce tracking errors and cannot directly process relative pose measurements unless these are expressed as local velocity measurements and processed as such. As discussed in detail in [10], this can be problematic, especially in cases where the relative pose measurements are available at a lower rate than the IMU signals. For this reason we have developed a variant of a 6DOF Kalman filter that is capable of optimally fusing inertial measurements from the IMU with displacement estimates provided by a vision-based feature tracking algorithm. At this point we defer the derivation of this estimator to Section 2.3 and describe first the IBME algorithm.

2.2 Visual Feature Tracking and Navigation

There exist many different types of algorithms for image-based motion estimation (IBME). For efficiency we use an algorithm that falls in the category of two-frame feature-based motion estimation [11]. To obtain complete 6DOF motion estimates, our algorithm is augmented by altimeter measurements for scale estimation. Below we give a brief overview of our motion estimation algorithm; for more details, please see our previous work [1].

The steps of the algorithm are as follows. At one time instant a descent camera image and a laser altimeter reading are obtained. A short time later, another image and altimeter reading are recorded. The algorithm then processes these pairs of measurements to estimate the rigid 6DOF motion between readings. There are multiple steps in our algorithm. First, distinct features are selected in the first image and then tracked into the second image. Given these feature matches, the motion state and covariance of the spacecraft, up to a scale on translation, are computed. Finally, the magnitude of translation is computed by combining altimetry with the motion, and the motion and motion covariance are scaled accordingly. These measurements are then passed to the Kalman filter where they are combined with inertial measurements to estimate the state of the vehicle.

2.2.1 Feature Detection and Tracking:

Figure 1: Tracked features.

The first step in two-frame motion estimation is the extraction of features from the first image. Feature detection has been studied extensively and multiple proven feature detection methods exist, so we elected to modify a proven feature detection method instead of redesigning our own. Since processing speed is a very important design constraint for our application, we selected the efficient feature detection algorithm of Benedetti and Perona [12]. This algorithm is an implementation of the well known Shi-Tomasi feature detector and tracker [13], modified to eliminate transcendental arithmetic. Although they ultimately implemented their algorithm in hardware on a reconfigurable computer, we have found that their algorithmic enhancements also decrease the running time of software implementations.

Usually feature detection algorithms exhaustively search the image for every distinct feature. However, when the goal is motion estimation, only a relatively small number of features needs to be tracked (~100). Consequently, we can speed up feature detection by using a random search strategy instead of an exhaustive search while still guaranteeing that the required number of features is obtained. Suppose that N features are needed for motion estimation. Our detection algorithm selects a pixel at random from the image (uniform distribution in the row and column directions). It then computes the sums of intensity derivatives in a neighborhood of the pixel that are used to determine if the pixel is good for tracking. Intensity derivatives are only computed if they have not been computed previously. If the sums of intensity derivatives are large enough (see [12]) then the pixel is selected as a feature for tracking. This procedure is repeated until N features are detected.
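
A minimal sketch of this random search strategy is given below. It is not the authors' implementation: it uses the plain Shi-Tomasi minimum-eigenvalue score rather than the Benedetti-Perona modification, it precomputes the image gradients instead of evaluating them lazily, and the window size and threshold are arbitrary assumptions.

```python
import numpy as np

def detect_features_random(img, n_features=100, win=7, thresh=1e-3, max_tries=20000, rng=None):
    """Randomly sample pixels and keep those whose local gradient structure
    is strong enough for tracking (Shi-Tomasi style minimum-eigenvalue test)."""
    rng = np.random.default_rng() if rng is None else rng
    gy, gx = np.gradient(img.astype(float))     # intensity derivatives (precomputed once here)
    h, w = img.shape
    r = win // 2
    features = []
    for _ in range(max_tries):
        if len(features) >= n_features:
            break
        row = rng.integers(r, h - r)            # uniform in row/column directions
        col = rng.integers(r, w - r)
        wx = gx[row - r:row + r + 1, col - r:col + r + 1]
        wy = gy[row - r:row + r + 1, col - r:col + r + 1]
        # Sums of intensity-derivative products over the neighborhood
        sxx, syy, sxy = np.sum(wx * wx), np.sum(wy * wy), np.sum(wx * wy)
        # Smaller eigenvalue of the 2x2 structure matrix [[sxx, sxy], [sxy, syy]]
        lam_min = 0.5 * (sxx + syy - np.sqrt((sxx - syy) ** 2 + 4.0 * sxy ** 2))
        if lam_min > thresh:
            features.append((row, col))
    return features

# Usage on a synthetic textured image (assumed stand-in for a descent camera frame)
img = np.random.default_rng(0).random((240, 320))
print(len(detect_features_random(img, n_features=50, thresh=1.0)))
```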

Next, features are tracked from the first frame to the second. As with feature detection, there exist multiple methods for feature tracking in the machine vision literature. Feature tracking can be split into two groups of algorithms: correlation based methods and optical flow based methods. Optical flow feature trackers are appropriate when the motion between images is expected to be small. Since we cannot make this assumption for our application (autonomous EDL for spacecraft), we chose a standard correlation-based feature tracker [14]. This tracker has been made efficient through register arithmetic and a sliding window of sums for the correlation computation. An example of features selected and tracked between frames is given in Fig. 1.
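
For illustration, the sketch below shows the core of a correlation-based tracker in its simplest form: a template around a feature in the first image is compared, via normalized cross-correlation, with candidate windows in a search region of the second image. This is not the tracker of [14]; the brute-force search (instead of register arithmetic and sliding sums), the template size, and the search radius are simplifications assumed for clarity.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def track_feature(img0, img1, row, col, tmpl=7, search=20):
    """Find the position in img1 that best matches a (2*tmpl+1)^2 template
    taken around (row, col) in img0, searching a (2*search+1)^2 region."""
    template = img0[row - tmpl:row + tmpl + 1, col - tmpl:col + tmpl + 1]
    best_score, best_pos = -np.inf, None
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = row + dr, col + dc
            if r - tmpl < 0 or c - tmpl < 0 or r + tmpl + 1 > img1.shape[0] or c + tmpl + 1 > img1.shape[1]:
                continue
            window = img1[r - tmpl:r + tmpl + 1, c - tmpl:c + tmpl + 1]
            score = ncc(template, window)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Usage: shift a synthetic image by a known offset and recover it.
rng = np.random.default_rng(1)
img0 = rng.random((200, 200))
img1 = np.roll(img0, shift=(5, -3), axis=(0, 1))   # true motion: +5 rows, -3 cols
print(track_feature(img0, img1, 100, 100))          # expect ((105, 97), ~1.0)
```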

2.2.2 Motion Estimation:

The motion between camera views is described by a rigid transformation where the rotation q, represented as a unit quaternion, encodes the rotation between views and t encodes the translation between views. Once features are tracked between images, the motion of the camera can be estimated by solving for the motion parameters that, when applied to the features in the first image, bring them close to the corresponding features in the second image. During estimation the motion parameters are concatenated into a single state vector

a = [q^T \; t^T]^T    (1)

Features in the first image are represented in unit focal coordinates as p_i = [1 \; y_i \; z_i]^T and in the second image as p'_i = [1 \; y'_i \; z'_i]^T. The cost function that is minimized to estimate the motion parameters is

L(a) = \sum_i \| p'_i - P_i(a) \|^2    (2)

where P_i(a) is the projection of the i-th feature in the first image into the second. P_i is defined as follows. Let X_i be the 3-D coordinate of feature i at depth d_i; then

X_i = d_i \, p_i    (3)

The 3-D coordinates of feature i after the motion, X'_i = [x'_i \; y'_i \; z'_i]^T, are then

X'_i = C(q) X_i + t    (4)

and the projection of X'_i into the second image is

P_i(a) = [1 \;\; y'_i / x'_i \;\; z'_i / x'_i]^T    (5)

L is minimized using a robust Levenberg-Marquardt nonlinear minimization algorithm; this algorithm requires an initial estimate of the motion parameters and feature depths. In our implementation, the feature depths are all initially set to 1. An initial estimate of the motion parameters can be obtained using a number of methods including: unaided inertial navigation, the robust 8-point algorithm [1], or a global search. For the results generated in this paper the initial estimate comes from a simple global search that finds the global minimum of L by searching a coarse discretization of the rotation space, assuming zero translation, and then searching a coarse discretization of the translation space using the best rotation.

During minimization, a robust technique is used to prevent features that are inconsistent with the motion estimate from the previous iteration from corrupting the motion estimate for the current iteration. During each iteration of the minimization, the projection residuals of the features are stored in a vector and sorted. If a residual is larger than three times the median of the sorted residuals, the corresponding feature is considered an outlier and is removed from the computation of L for the current iteration. After a few iterations (~10), the estimation converges to the best estimate of the motion. Using this estimate, the Fisher information matrix

J = \sum_i \left( \frac{dL_i}{da} \right) \left( \frac{dL_i}{da} \right)^T    (6)

where

\frac{dL_i}{da} = 2 \left( \frac{dP_i}{da} \right)^T ( P_i(a) - p'_i )    (7)

is computed and passed to the Kalman filter for use in generating the covariance of the motion estimate.
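
The sketch below implements the reprojection cost of Eqs. (2)-(5) and minimizes it with SciPy's Levenberg-Marquardt solver. It is a simplified stand-in for the robust implementation described above: there is no outlier gating, the rotation is parameterized as a rotation vector rather than a quaternion, and the x-forward unit-focal convention used in the reconstruction of Eq. (5) is an assumption. Feature depths are appended to the parameter vector and initialized to 1, as in the text.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, feats0, feats1):
    """Reprojection residuals of Eq. (2): params = [rotvec(3), t(3), depths(n)].
    feats0/feats1 are (n, 2) arrays of unit-focal coordinates (y, z), x-forward."""
    n = feats0.shape[0]
    R = Rotation.from_rotvec(params[:3]).as_matrix()    # C(q) in Eq. (4)
    t = params[3:6]
    depths = params[6:6 + n]
    p0 = np.column_stack([np.ones(n), feats0])           # unit focal coords [1, y, z]
    X0 = depths[:, None] * p0                            # Eq. (3)
    X1 = X0 @ R.T + t                                    # Eq. (4)
    proj = X1[:, 1:] / X1[:, :1]                         # Eq. (5): divide by forward component
    return (feats1 - proj).ravel()

# --- synthetic two-frame problem (assumed values, for illustration only) ---
rng = np.random.default_rng(2)
n = 50
depths_true = rng.uniform(5.0, 10.0, n)
feats0 = rng.uniform(-0.3, 0.3, (n, 2))
R_true = Rotation.from_rotvec([0.02, -0.05, 0.03])
t_true = np.array([0.5, 0.2, -0.1])
X0 = depths_true[:, None] * np.column_stack([np.ones(n), feats0])
X1 = X0 @ R_true.as_matrix().T + t_true
feats1 = X1[:, 1:] / X1[:, :1]

x0 = np.concatenate([np.zeros(3), np.zeros(3), np.ones(n)])    # depths start at 1
sol = least_squares(residuals, x0, args=(feats0, feats1), method='lm')
t_est = sol.x[3:6]
print("heading direction:", t_est / np.linalg.norm(t_est))     # compare to t_true/|t_true|
print("rotation (rotvec):", sol.x[:3])                          # compare to [0.02, -0.05, 0.03]
```

Note that, as discussed next, the magnitude of t is not observable from the images alone; only the heading direction and rotation are meaningful in this sketch.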

A fundamental shortcoming of all image-based motion estimation algorithms is the inability to solve for the magnitude of translational motion, so the output of motion estimation is a 5DOF motion composed of a unit vector describing the direction of heading and the rotation between views. As described in [1], laser altimetry can be combined with the 5DOF motion estimate to compute the complete 6DOF motion of the vehicle. However, for the gantry results presented in this paper, the altimetry data was too coarse, so the magnitude of translation was derived from the change in gantry positions between image captures. For each image pair, the output sent to the Kalman filter for image-based motion estimation is the relative pose measurement and its corresponding covariance, calculated based on the motion state a, the magnitude of translation, and the Fisher information matrix J.

2.3 Fusion of Inertial and Relative Sensor Data

In this section we derive the equations for the modified Kalman filter that processes the relative pose measurements from the IBME algorithm. Since our formulation is based on sensor modeling, we use the Indirect form of the Kalman filter that estimates the errors in the estimated states instead of the states themselves. The interested reader is referred to [15, 16] for a detailed description of the advantages of the Indirect KF vs. the Direct KF. In what follows, we assume that at time t_k the vehicle is at position p_k with (quaternion) attitude q_k, and after m steps it has moved to position p_{k+m} with attitude q_{k+m}. Frames {k} and {k+m} are the inertial frames of reference attached to the vehicle at times t_k and t_{k+m} correspondingly.

2.3.1 Relative Position Measurement Error:

The relative position measurement between the locations p_k and p_{k+m} can be written as:

z_p = C(q_k) (p_{k+m} - p_k) + n_p    (8)

where n_p is the noise associated with this measurement, assumed to be a zero-mean white Gaussian process with covariance R_p = E[n_p n_p^T], and C(q_k) is the rotational matrix that expresses the orientation transformation between frame {k} and the global frame. If \tilde{p} is the error in the estimate of a position and \delta\theta_k is the error in the estimate of the attitude, then Equation (8) can now be written as:

z_p = (I - \lfloor \delta\theta_k \times \rfloor) C(\hat{q}_k) \left( (\hat{p}_{k+m} + \tilde{p}_{k+m}) - (\hat{p}_k + \tilde{p}_k) \right) + n_p    (9)

The estimated relative position measurement is:

\hat{z}_p = C(\hat{q}_k) (\hat{p}_{k+m} - \hat{p}_k)    (10)

The error in the relative position measurement is \tilde{z}_p = z_p - \hat{z}_p. By substituting from Eqs. (9), (10) and employing the small error angle approximation \delta q \simeq [\frac{1}{2}\delta\theta^T \; 1]^T, it can be shown [10] that:

\tilde{z}_p \simeq \lfloor \hat{z}_p \times \rfloor \delta\theta_k + C(\hat{q}_k) (\tilde{p}_{k+m} - \tilde{p}_k) + n_p    (11)

(From here on we have also dropped the vector symbol from the real, measured, estimated and error positions to simplify the notation.) Here \lfloor v \times \rfloor denotes the cross product (skew-symmetric) matrix of a vector v. In Eq. (11) the first term expresses the effect of the orientation uncertainty at time t_k on the quality of the estimated measurement. Note that if at time t_k there were no uncertainty about the orientation of the vehicle, that would mean \delta\theta_k = 0, and thus the error in the relative position measurement would depend only on the errors in the estimates of the previous and current position of the vehicle.

2.3.2 Relative Attitude Measurement Error:

The relative attitude measurement between the locations at t_k and t_{k+m} is the quaternion of rotation between frames {k} and {k+m}, corrupted by the measurement noise:

z_q = \delta q_n \otimes q_{k+m} \otimes q_k^{-1}    (12)

where \delta q_n is the relative attitude measurement noise. We assume that the corresponding vector noise n_q is a zero-mean white Gaussian process with covariance R_q = E[n_q n_q^T]. Since q_k = \delta q_k \otimes \hat{q}_k and q_{k+m} = \delta q_{k+m} \otimes \hat{q}_{k+m}, the relative attitude can be written as:

q_{k+m} \otimes q_k^{-1} = \delta q_{k+m} \otimes \hat{q}_{k+m} \otimes \hat{q}_k^{-1} \otimes \delta q_k^{-1}    (13)

By substituting Eq. (13) in Eq. (12) we have:

z_q = \delta q_n \otimes \delta q_{k+m} \otimes \hat{\bar{q}} \otimes \delta q_k^{-1}    (14)

with \hat{\bar{q}} = \hat{q}_{k+m} \otimes \hat{q}_k^{-1} the estimated relative attitude. In order to simplify the notation we write each error quaternion as \delta q = [\delta\vec{q}^T \; \delta q_4]^T. For small attitude estimation errors \delta\theta_k and \delta\theta_{k+m} we make the following approximations: |\delta\vec{q}_k| << 1, |\delta\vec{q}_{k+m}| << 1, i.e. \delta q \simeq [\frac{1}{2}\delta\theta^T \; 1]^T. With these approximations the first term in Eq. (14) can be written to first order in the error angles and the noise (Eq. (15)). By multiplying both sides of Eq. (12) with the matrix of Eq. (16), which isolates the vector part of the resulting quaternion product, we define the vector attitude measurement error (Eq. (17)). By substituting from Eqs. (15), (16), the first term in the previous equation can be written as in Eq. (18), and Eq. (17) is finally expressed as:

\tilde{z}_q \simeq \frac{1}{2} \left( \delta\theta_{k+m} - C(\hat{\bar{q}}) \, \delta\theta_k \right) + n_q    (19)

where we have used the small angle approximation for the error quaternions and C(\hat{\bar{q}}) is the rotational matrix corresponding to the estimated relative attitude \hat{\bar{q}}.

2.3.3 Relative Pose Measurement Error:

The Indirect Kalman filter estimates the errors in: (i) the attitude \delta\theta, (ii) the gyroscope biases \Delta b_g, (iii) the velocity \tilde{v}, (iv) the accelerometer biases \Delta b_a, and (v) the position \tilde{p}. The error state vectors estimated by the filter at times t_k and t_{k+m} are:

\tilde{x}_k = [\delta\theta_k^T \;\; \Delta b_{g,k}^T \;\; \tilde{v}_k^T \;\; \Delta b_{a,k}^T \;\; \tilde{p}_k^T]^T, \qquad \tilde{x}_{k+m} = [\delta\theta_{k+m}^T \;\; \Delta b_{g,k+m}^T \;\; \tilde{v}_{k+m}^T \;\; \Delta b_{a,k+m}^T \;\; \tilde{p}_{k+m}^T]^T

The errors in the relative position and attitude (pose) measurements calculated in Eqs. (11) and (19) are:

\tilde{z} = \begin{bmatrix} \tilde{z}_p \\ \tilde{z}_q \end{bmatrix} = H_k \tilde{x}_k + H_{k+m} \tilde{x}_{k+m} + n    (20)

with n = [n_p^T \; n_q^T]^T. Both noise vectors n_p and n_q are assumed to be zero-mean white Gaussian processes, with R = E[n n^T] the covariance of the stacked noise vector. As is evident from Eq. (20), the relative pose measurement error is expressed in terms of the current and the previous (error) state of the system. Therefore the Kalman filter state vector has to be appropriately augmented to contain both of these state estimates. Note that t_k and t_{k+m} are the time instants when the images processed by the IBME were recorded, and thus the relative pose (motion estimate) measurement provided by the IBME corresponds to the time interval [t_k, t_{k+m}].

2.3.4 Augmented-State Propagation:

If \hat{x}_{k/k} is the state estimate at time t_k (when the first image was recorded), we augment the state vector with a second copy of this estimate:

\check{x}_{k/k} = \begin{bmatrix} \hat{x}_{k/k} \\ \hat{x}_{k/k} \end{bmatrix}

Since initially, at time t_k, both versions of the estimate of the (error) state contain the same amount of information, the covariance matrix for the augmented system would be:

\check{P}_{k/k} = \begin{bmatrix} P_{k/k} & P_{k/k} \\ P_{k/k} & P_{k/k} \end{bmatrix}

where P_{k/k} is the covariance matrix for the (error) state of the vehicle at time t_k. In order to conserve the estimate of the state at t_k, necessary for evaluating the relative pose measurement error at t_{k+m}, during this interval only the second copy of the state estimate is propagated while the first one remains stationary. The propagation equation for the augmented system, based on Eq. (26), is:

\check{x}_{k+1/k} = \begin{bmatrix} I & 0 \\ 0 & \Phi_{k+1} \end{bmatrix} \check{x}_{k/k} + \begin{bmatrix} 0 \\ G_{k+1} \end{bmatrix} w_{k+1}, \quad \text{or} \quad \check{x}_{k+1/k} = \check{\Phi}_{k+1} \check{x}_{k/k} + \check{G}_{k+1} w_{k+1}

where the upper (identity-propagated) block is the non-moving copy of the error state of the vehicle. The covariance of the augmented system is propagated according to Eq. (27) and after m steps it is:

\check{P}_{k+m/k} = \begin{bmatrix} P_{k/k} & P_{k/k} \Phi^T \\ \Phi P_{k/k} & P_{k+m/k} \end{bmatrix}    (21)

where \Phi = \prod_{i=1}^{m} \Phi_{k+i} and P_{k+m/k} is the propagated covariance of the evolving state at time t_{k+m}.
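
The block structure of Eq. (21) can be reproduced numerically. The sketch below clones a covariance, propagates only the evolving half of the augmented system, and checks that the cross-correlation block equals P_{k/k} \Phi^T. It is a generic illustration of the stochastic cloning idea with a small random system standing in for the actual filter; the dimensions, \Phi and Q are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6                                  # toy error-state dimension (assumed)

# Covariance at t_k and per-step process noise (arbitrary stand-ins)
A = rng.standard_normal((n, n))
P_kk = A @ A.T + n * np.eye(n)         # symmetric positive definite
Q = 0.01 * np.eye(n)                   # process noise covariance per step

# Clone: both copies share the same information at t_k
P_aug = np.block([[P_kk, P_kk],
                  [P_kk, P_kk]])

Phi_total = np.eye(n)
for _ in range(10):                    # m = 10 propagation steps
    Phi_step = np.eye(n) + 0.05 * rng.standard_normal((n, n))
    # Augmented transition: identity for the static clone, Phi_step for the evolving copy
    Phi_aug = np.block([[np.eye(n), np.zeros((n, n))],
                        [np.zeros((n, n)), Phi_step]])
    G_aug = np.vstack([np.zeros((n, n)), np.eye(n)])
    P_aug = Phi_aug @ P_aug @ Phi_aug.T + G_aug @ Q @ G_aug.T
    Phi_total = Phi_step @ Phi_total

# Cross-correlation block should be P_kk @ Phi_total.T, as in Eq. (21)
print(np.allclose(P_aug[:n, n:], P_kk @ Phi_total.T))   # True
print(np.allclose(P_aug[:n, :n], P_kk))                 # static block unchanged: True
```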

2.3.5 Update Equations:

When the relative pose measurement is received, the covariance matrix for the residual is given by Eq. (28):

S = H \check{P}_{k+m/k} H^T + R_m    (22)

where H = [H_k \;\; H_{k+m}], R_m is the adjusted covariance for the relative pose measurement, and R is the initial covariance of this noise as calculated by the IBME algorithm. Substituting from Eqs. (21), (22), this (pseudo-)residual covariance matrix can be expanded as:

S = H_k P_{k/k} H_k^T + H_k P_{k/k} \Phi^T H_{k+m}^T + H_{k+m} \Phi P_{k/k} H_k^T + H_{k+m} P_{k+m/k} H_{k+m}^T + R_m

(In the derivation of the equations of the Kalman filter that processes relative pose measurements, we have duplicated the state estimate and its corresponding covariance at time t_k and allowed each of them to evolve separately. We have coined the term stochastic cloning for this new technique. From the Oxford English Dictionary: [ad. Greek κλών twig, slip], clone, v. To propagate or reproduce (an identical individual) from a given original; to replicate (an existing individual). The discrete time state and system noise propagation matrices \check{\Phi}_{k+1}, \check{G}_{k+1} are described in detail in [17, 16, 10].)

The updated covariance matrix is calculated from Eq. (30) as:

\check{P}_{k+m/k+m} = \check{P}_{k+m/k} - \check{P}_{k+m/k} H^T S^{-1} H \check{P}_{k+m/k}    (23)

The updated covariance matrix for the new state of the vehicle is its lower-right diagonal submatrix:

P_{k+m/k+m} = [\check{P}_{k+m/k+m}]_{22}

The Kalman gain is calculated by applying Eq. (29):

\check{K} = \check{P}_{k+m/k} H^T S^{-1}    (24)

with its lower block, the gain applied to the evolving state, given by:

K_{k+m} = \left( \Phi P_{k/k} H_k^T + P_{k+m/k} H_{k+m}^T \right) S^{-1}    (25)

The residual is calculated as in Eq. (31):

r = z - \hat{z}

where z = [z_p^T \; z_q^T]^T are the relative position and orientation measurements provided by the IBME, and \hat{z} is the expected relative pose computed from the estimates, with \hat{z}_p = C(\hat{q}_k)(\hat{p}_{k+m} - \hat{p}_k) and \hat{z}_q derived from \hat{\bar{q}} = \hat{q}_{k+m} \otimes \hat{q}_k^{-1}. Finally, the updated augmented state is given by Eq. (32):

\check{x}_{k+m/k+m} = \check{x}_{k+m/k} + \check{K} r

and, from Eq. (25), the (evolving) state is updated as:

x_{k+m/k+m} = x_{k+m/k} + K_{k+m} r

where the measured relative displacement (pose), expressed in global coordinates, serves as the pseudo-measurement. The quantities \hat{z}_p and \hat{z}_q are computed using the previous and current state estimates from the filter. Note that the current state estimates at time t_{k+m} are calculated by propagating the previous state estimates at time t_k using the rotational velocity and linear acceleration measurements from the IMU.
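
The update itself is standard Kalman filter algebra applied to the augmented (cloned) covariance. The sketch below carries out Eqs. (22)-(24) and the covariance/state correction with random stand-ins for H_k, H_{k+m}, R_m and the prior covariance, so it only illustrates the structure of the computation, not the actual filter of this paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m_z = 6, 6                      # toy error-state and measurement dimensions (assumed)

# Augmented prior covariance with the cloned block structure of Eq. (21)
A = rng.standard_normal((n, n))
P_kk = A @ A.T + n * np.eye(n)
Phi = np.eye(n) + 0.1 * rng.standard_normal((n, n))
P_prop = Phi @ P_kk @ Phi.T + 0.05 * np.eye(n)           # evolving-state covariance
P_aug = np.block([[P_kk, P_kk @ Phi.T],
                  [Phi @ P_kk, P_prop]])

# Relative pose measurement model: z_tilde = H_k x_k + H_km x_km + n  (Eq. (20))
H_k = rng.standard_normal((m_z, n))
H_km = rng.standard_normal((m_z, n))
H = np.hstack([H_k, H_km])
R_m = 0.01 * np.eye(m_z)
r = rng.standard_normal(m_z)       # residual z - z_hat (stand-in)

S = H @ P_aug @ H.T + R_m                                 # Eq. (22)
K = P_aug @ H.T @ np.linalg.inv(S)                        # Eq. (24)
P_upd = P_aug - K @ S @ K.T                               # equivalent to Eq. (23)
dx = K @ r                                                # correction to the augmented error state

P_new = P_upd[n:, n:]              # lower-right block: updated covariance of the current state
print(dx[n:])                      # correction applied to the evolving (current) state
print(np.all(np.linalg.eigvalsh(P_new) > 0))              # stays positive definite
```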

m are utually indep enden t, i.e. 0. If the IBME algorithm uses the same set of features from an in termediate image to trac the ose of the ehicle through consecutiv steps then these measuremen ts are lo osely correlated: m +( +1) m +( +1) << m m In this case the correlations ha to explicitly addressed the estimation algorithm. The in terested reader is re- ferred to [18 for detailed treatmen of this case. Exp erimen tal Results 3.1 Gan try description Exp erimen ts ere erformed on 5DOF gan try testb ed at JPL, sho wn in Fig. 2. The gan try pro vides

hard- are in the lo op testb ed for collecting data of sim ulated planetary surface useful for alidating algorithms in con- trolled en vironmen t. It can commanded to mo lin- early at constan elo cit in x, and and can also pan and tilt. The gan try pro vides ground truth linear measure- men ts with 0.35 millimeter resolution and angular mea- suremen ts with 0.01 degree resolution at up to Hz data rate. It can carry pa yloads eighing up to ounds and still pan and tilt while hea vier pa yloads can carried with linear motions only or our exp erimen ts, attac hed an Figure 2: 5DOF Gan try estb ed

electronics pac age con taining sensors, on oard comput- ers, and battery er to the gan try The sensors include Crossb DMU-V GX IMU, do wn ard oin ting laser altimeter and greyscale CCD camera. Only the IMU and CCD camera ere used for these exp erimen ts. The laser
Page 7
10 15 20 400 600 800 1000 1200 x (mm) time (sec) 10 15 20 500 1000 1500 2000 y (mm) time (sec) 10 15 20 −1000 −800 −600 −400 z (mm) time (sec) GANTRY VISION IMU Figure 3: ra jectories estimated in tegration of (i) the rel- ativ ose measuremen ts pro vided the IBME vision-based algorithm, (ii)

the IMU signals, and (iii) the gan try altimeter will in tegrated in to the approac in the future. or our exp erimen ts, only linear motions ere commanded, pan and tilt as held at zero degrees orien tation due to eigh constrain ts as describ ed ab e. All sensor and ground truth gan try data ere gathered while follo wing these tra jectories and then pro cessed and analyzed o-line. 3.2 Preliminary results In the results presen ted here the motion of the ehicle is trac ed after it has een accelerated to sp eed of [42 42 0] mm=sec at sec or the rest of the time the ehicle follo ws straigh line,

For the rest of the time the vehicle follows a straight-line, almost constant velocity trajectory until time t = 18 sec, when it decelerates to a full stop at t = 19 sec.

In order to extract the actual body acceleration during this motion, the local projection of the gravitational acceleration vector has to be subtracted from the accelerometer signals. Even small errors in the attitude estimate can cause significant errors in the calculation of the actual body accelerations. This is more prevalent during slow motions with small body accelerations such as the ones during this experiment. The velocities and positions estimated through the integration of the IMU are susceptible to large errors due to the magnitude of the gravitational acceleration compared to the minute body accelerations that the vehicle experiences during its actual motion. With gravity being the dominant acceleration in the measured signals, an error analysis based on the propagation equations of the IMU integrator has shown that even for small attitude errors, the errors in position can grow as much as 171 mm while the vehicle only moves as much as 42 mm.
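
To get a feel for the magnitude of this effect, the short calculation below double-integrates the residual acceleration that is left over when gravity is subtracted using a slightly wrong attitude estimate. The attitude error and time span are assumed values for illustration; they are not the error budget of the experiment described here.

```python
import numpy as np

g = 9.81                     # gravitational acceleration [m/s^2]
att_err_deg = 0.1            # assumed attitude error [deg]
T = 2.0                      # assumed integration time [s]

# A tilt error of att_err leaks roughly g*sin(att_err) of gravity into the
# horizontal accelerometer channels; double integration turns it into position error.
a_residual = g * np.sin(np.radians(att_err_deg))
pos_error = 0.5 * a_residual * T**2

print(f"residual acceleration: {a_residual*1000:.1f} mm/s^2")
print(f"position error after {T:.0f} s: {pos_error*1000:.0f} mm")
# Even ~0.1 deg of attitude error over 2 s already gives tens of millimeters of position error.
```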

This is evident in Fig. 3, where the position estimates calculated by appropriate integration of the IMU signals are valid for only a short period of time before the errors grow unbounded (e.g., for one component of the position the final error is over 1500 mm). Note also that during this relatively slow motion the integration of the IBME estimates provides significantly better results, with only small errors introduced at the beginning of the experiment (end of the acceleration phase).

Figure 4: Trajectories estimated by (i) the integration of the relative pose measurements provided by the IBME vision-based algorithm, (ii) the Kalman filter, and (iii) the gantry (x-y).

By combining the IBME relative pose measurements with the IMU signals within the Kalman filter, positioning errors are substantially reduced. The vehicle trajectory estimated by the KF follows the ground truth path (recorded by the gantry) more closely than the trajectories estimated previously by either the IMU or the IBME. The position estimates for x-y and z are shown in Figs. 4 and 5(a) respectively. The average (absolute) errors in these estimates were [4 2] mm for the KF, [17 41 29 9] mm for the IBME and [53 464 126 1] mm for the IMU.

The availability of intermittent (relative) positioning information enables the filter to also update the estimates of the biases in the accelerometer and gyroscope signals, as depicted in Fig. 5(b). This in effect reduces the errors in the linear acceleration and rotational velocity measurements and allows the KF estimator to operate for longer periods of time before an external absolute pose measurement is necessary. Finally, we should note that since the information from the IBME corresponds to relative and not absolute pose measurements, the filter estimated errors will continue to grow, albeit at a slower rate. This rate is determined by the frequency and quality of the IBME measurements.

Figure 5: (a) Trajectories estimated by (i) the integration of the relative pose measurements provided by the IBME vision-based algorithm, (ii) the Kalman filter, and (iii) the gantry (z vs. time). (b) z-axis accelerometer bias estimates (m/sec^2).

4 Summary and Future Work

The motivation for this work is to advance the state of the art in technologies that enable autonomous safe and precise landing on planetary bodies. However, the general Kalman filter methodology described here can be extended to any application that combines traditional inertial sensors with those that provide relative measurements (differences between previous and current states of the system). We have shown that augmenting inertial navigation with image-based motion estimation provides advantages over using each method separately by increasing state estimation accuracy. In addition, the fusion of IMU measurements with motion estimates from the IBME increases the robustness of the Kalman filter estimator due to the fact that these sensing modalities have complementary operating conditions and noise characteristics. The IMU is capable of tracking sudden motions but drifts fast during longer, smoother trajectories, while the IBME is best suited for low frequency movements and is immune to low frequency drifts. Finally, although propagated pose estimates from the KF were not used to initialize the IBME at this stage, we expect that this will increase the speed of convergence of the IBME algorithm by reducing the size of the search space.

Acknowledgments

The research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We would like to thank Chuck Bergh for his work on the gantry and the electronics package.

Indirect Kalman Filter Equations

Propagation:

\check{x}_{k+1/k} = \check{\Phi}_{k+1} \check{x}_{k/k}    (26)

\check{P}_{k+1/k} = \check{\Phi}_{k+1} \check{P}_{k/k} \check{\Phi}_{k+1}^T + \check{G}_{k+1} Q_{k+1} \check{G}_{k+1}^T    (27)

Update:

S_{k+1} = H_{k+1} \check{P}_{k+1/k} H_{k+1}^T + R_{k+1}    (28)

K_{k+1} = \check{P}_{k+1/k} H_{k+1}^T S_{k+1}^{-1}    (29)

\check{P}_{k+1/k+1} = \check{P}_{k+1/k} - \check{P}_{k+1/k} H_{k+1}^T S_{k+1}^{-1} H_{k+1} \check{P}_{k+1/k}    (30)

r_{k+1} = z_{k+1} - \hat{z}_{k+1}    (31)

\check{x}_{k+1/k+1} = \check{x}_{k+1/k} + K_{k+1} r_{k+1}    (32)

References

[1] A. E. Johnson and L. H. Matthies, "Precise image-based motion estimation for autonomous small body exploration," in Proc. 5th Int'l Symp. on Artificial Intelligence, Robotics and Automation in Space, Noordwijk, The Netherlands, June 1-3, 1999, pp. 627-634.
[2] S. I. Roumeliotis, G. S. Sukhatme, and G. A. Bekey, "Circumventing dynamic modeling: Evaluation of the error-state Kalman filter applied to mobile robot localization," in Proceedings of the IEEE International Conference on Robotics and Automation, Detroit, MI, May 10-15, 1999, vol. 2, pp. 1656-1663.
[3] G. Qian, R. Chellappa, Q. Zheng, and J. Ortolf, "Camera motion estimation using monocular image sequences and inertial data," Tech. Rep. CS-TR-3997, University of Maryland, College Park, MD, 1999.
[4] M. Bosse, W. C. Karl, D. Castanon, and DiBitetto, "A vision augmented navigation system," in Proceedings of the IEEE Conference on Intelligent Transportation Systems, Boston, MA, 9-12 Nov. 1997, pp. 1028-1033.
[5] O. Amidi, T. Kanade, and K. Fujita, "A visual odometer for autonomous helicopter flight," Robotics and Autonomous Systems, vol. 28, no. 2-3, pp. 185-193, Aug. 1999.
[6] J. A. Farrell, T. D. Givargis, and M. J. Barth, "Real-time differential carrier phase GPS-aided INS," IEEE Transactions on Control Systems Technology, vol. 8, no. 4, pp. 709-721, July 2000.
[7] J. R. Wertz, Ed., Spacecraft Attitude Determination and Control, vol. 73 of Astrophysics and Space Science Library, D. Reidel Publishing Company, Dordrecht, The Netherlands, 1978.
[8] E. J. Lefferts, F. L. Markley, and M. D. Shuster, "Kalman filtering for spacecraft attitude estimation," Journal of Guidance, Control, and Dynamics, vol. 5, no. 5, pp. 417-429, Sept.-Oct. 1982.
[9] S. Sukkarieh, E. M. Nebot, and H. F. Durrant-Whyte, "A high integrity IMU/GPS navigation loop for autonomous land vehicle applications," IEEE Transactions on Robotics and Automation, vol. 15, no. 3, pp. 572-578, June 1999.
[10] S. I. Roumeliotis, "A Kalman filter for processing 3-D relative pose measurements," Tech. Rep., Robotics Laboratory, California Institute of Technology, Sep. 2001, http://robotics.caltech.edu/~stergios/tech_reports/relative_3d_kf.pdf.
[11] J. Weng, N. Ahuja, and T. Huang, "Optimal motion and structure estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 9, pp. 864-884, 1993.
[12] A. Benedetti and P. Perona, "Real-time 2-D feature detection on a reconfigurable computer," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Santa Barbara, CA, 23-25 June 1998, pp. 586-593.
[13] J. Shi and C. Tomasi, "Good features to track," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR'94), Seattle, WA, 21-23 June 1994, pp. 593-600.
[14] L. Matthies, Dynamic Stereo Vision, Ph.D. thesis, School of Computer Science, Carnegie Mellon University, 1989.
[15] E. J. Lefferts and F. L. Markley, "Dynamics modeling for attitude determination," AIAA Paper 76-1910, Aug. 1976.
[16] S. I. Roumeliotis, Robust Mobile Robot Localization: From Single-Robot Uncertainties to Multi-Robot Interdependencies, Ph.D. thesis, Electrical Engineering Department, University of Southern California, Los Angeles, CA, May 2000.
[17] B. Friedland, "Analysis of strapdown navigation using quaternions," IEEE Transactions on Aerospace and Electronic Systems, vol. AES-14, no. 5, pp. 764-768, Sep. 1978.
[18] S. I. Roumeliotis, "Treatment of correlations in relative pose measurements," Tech. Rep., Robotics Laboratory, California Institute of Technology, March 2002, http://robotics.caltech.edu/~stergios/tech_reports/tr_rpm_correl.pdf.