3D Modeling with Depth Sensors


Presentation Transcript

1. 3D Modeling with Depth Sensors
Marc Pollefeys, Daniel Barath
Spring 2022
http://www.cvg.ethz.ch/teaching/3dvision/

2. Schedule
Feb 21: Introduction
Feb 28: Geometry, Camera Model, Calibration
Mar 7: Features, Tracking / Matching
Mar 14: Project Proposals by Students
Mar 21: Structure from Motion (SfM) + papers
Mar 28: Dense Correspondence (stereo / optical flow) + papers
Apr 4: Bundle Adjustment & SLAM + papers
Apr 11: Multi-View Stereo & Volumetric Modeling + papers
Apr 18: Easter break
Apr 25: Student Midterm Presentations
May 2: 3D Modeling with Depth Sensors + papers
May 9: Guest lecture + papers
May 16: Guest lecture + papers
May 30: Student Project Demo Day = Final Presentations

3. Previously
Obtaining “depth maps” / “range images” via stereo matching (Lecture 5)
Volumetric modeling from multiple images and their depth maps (last lecture)

4. Today
Actively obtaining “depth maps” / “range images”: unstructured light, structured light, time-of-flight
Registering range images for 3D modeling
(some slides from Szymon Rusinkiewicz, Brian Curless)

5. Taxonomy
3D modeling: passive (stereo, shape from silhouettes, …) vs. active (structured/unstructured light, laser scanning, photometric stereo)

6. 3D Modeling with Depth Sensors

7. Today’s class
Obtaining “depth maps” / “range images”: unstructured light, structured light, time-of-flight
Registering range images
(some slides from Szymon Rusinkiewicz, Brian Curless)

8. Unstructured light
Project texture to disambiguate stereo

9. Unstructured Light
Project texture to disambiguate stereo
Image credits: Thomas Schöps

10. Space-time stereo
Davis, Ramamoorthi, Rusinkiewicz, CVPR’03

11. Space-time stereo
Davis, Ramamoorthi, Rusinkiewicz, CVPR’03

12. Space-time stereo
Zhang, Curless and Seitz, CVPR’03

13. Space-time stereo
Zhang, Curless and Seitz, CVPR’03

14. Light Transport Constancy
Davis, Yang, Wang, ICCV’05

15. Triangulation Scanner
Light/laser source and camera: the “peak” position in the image reveals depth
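
The geometry on this slide can be sketched numerically: the detected stripe peak fixes a viewing ray, and intersecting that ray with the calibrated laser plane gives the 3D point. A minimal sketch, assuming a pinhole camera with focal length f and principal point at the image origin, and a laser plane n·X = d known in camera coordinates (all names and numbers here are illustrative, not from the slides):

```python
import numpy as np

def triangulate_stripe(u, v, f, plane_n, plane_d):
    """Intersect the viewing ray through pixel (u, v) with the laser plane
    plane_n . X = plane_d (pinhole camera, principal point at the origin)."""
    ray = np.array([u / f, v / f, 1.0])   # direction of the viewing ray
    t = plane_d / np.dot(plane_n, ray)    # ray parameter at the intersection
    return t * ray                        # 3D point in camera coordinates

# Hypothetical example: laser plane x = 0.2, peak detected at pixel (100, 0)
p = triangulate_stripe(100.0, 0.0, 500.0, np.array([1.0, 0.0, 0.0]), 0.2)
# p is (0.2, 0.0, 1.0): the stripe hit the surface at depth 1.0
```

Shifting the detected peak by one pixel moves the intersection along the ray, which is why depth resolution depends on the angle between ray and laser plane, as the later "triangulation angle" slide notes.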

16. Triangulation: Moving the Camera and Illumination
Moving them independently leads to problems with focus and resolution
Most scanners mount the camera and light source rigidly and move them as a unit; this also allows (partial) pre-calibration

17. Triangulation: Moving the Camera and Illumination

18. Triangulation: Extending to 3D
Alternatives: project dot(s) or stripe(s)
(Figure: object, laser, camera)

19. Triangulation Scanner Issues
Accuracy proportional to working volume (typically ~1000:1)
Scales down to small working volumes (e.g. 5 cm working volume, 50 µm accuracy)
Does not scale up (baseline becomes too large…)
Two-line-of-sight problem (shadowing from either camera or laser)
Triangulation angle: non-uniform resolution if too small, shadowing if too big (useful range: 15–30°)

20. Triangulation Scanner Issues
Material properties (dark, specular)
Subsurface scattering
Laser speckle
Edge curl
Texture embossing
Where is the exact (subpixel) spot position?

21.

22. Space-time analysis
Curless, Levoy, ICCV’95

23. Space-time analysis
Curless, Levoy, ICCV’95

24. Poor man’s scanner
Bouguet and Perona, ICCV’98

25. Projector as camera

26. Multi-Stripe Triangulation
To go faster, project multiple stripes. But which stripe is which?
Answer #1: assume surface continuity (e.g. Eyetronics’ ShapeCam)

27. Multi-Stripe Triangulation
To go faster, project multiple stripes. But which stripe is which?
Answer #2: colored stripes (or dots)

28. Multi-Stripe Triangulation
To go faster, project multiple stripes. But which stripe is which?
Answer #3: time-coded stripes

29. Time-Coded Light Patterns
Assign each stripe a unique illumination code over time [Posdamer 82]
(Figure: pattern sequence along space and time axes)

30. Better codes…
Gray code: neighbors differ in only one bit
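
The one-bit-per-neighbor property of the binary-reflected Gray code is easy to verify in code. The encode/decode pair below is the standard bit-twiddling formulation (generic, not specific to any particular scanner):

```python
def gray(n):
    """Binary-reflected Gray code of the integer n."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Invert the Gray code by folding the higher bits back down."""
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g

codes = [gray(i) for i in range(8)]  # 0, 1, 3, 2, 6, 7, 5, 4
```

Decoding the observed stripe code at each pixel recovers which stripe illuminated it; because neighboring stripes differ in a single bit, a decoding error near a stripe boundary shifts the index by at most one.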

31. Kinect
Infrared “projector” and infrared camera
Works indoors; no IR distraction, “invisible” to humans
Depth map: note stereo shadows!
Color image (unused for depth), IR image

32. Kinect
Projector pattern provides “strong texture”
Correlation-based stereo between the IR image and the projected pattern is possible
Failure cases: stereo shadow; bad SNR / too close; homogeneous region, ambiguous without pattern

33. Pulsed Time of Flight
Basic idea: send out a pulse of light (usually laser) and time how long it takes to return

34. Pulsed Time of Flight
Advantages: large working volume (up to 100 m)
Disadvantages: not-so-great accuracy (at best ~5 mm); requires getting timing to ~30 picoseconds; does not scale with working volume
Often used for scanning buildings, rooms, archeological sites, etc.
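
The ~30 picosecond figure follows directly from depth = c·Δt/2: the pulse travels the distance twice. A quick sketch of that arithmetic (constants only, no hardware assumptions):

```python
C = 299_792_458.0  # speed of light in m/s

def depth_from_time(dt):
    """One-way distance for a measured round-trip pulse time dt (seconds)."""
    return C * dt / 2.0

def timing_for_accuracy(dz):
    """Timing precision needed to resolve a depth difference dz (metres)."""
    return 2.0 * dz / C

# Resolving 5 mm requires timing to about 33 ps, consistent with the slide
dt_needed = timing_for_accuracy(0.005)
```

The same arithmetic shows why the working volume is large: a 100 m range corresponds to a comfortable ~0.67 µs round trip, so range is limited by pulse energy, not timing.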

35. Depth cameras
2D array of time-of-flight sensors, e.g. Kinect v2, Azure Kinect

36. Depth cameras
2D array of time-of-flight sensors, e.g. Canesta’s CMOS 3D sensor
Jitter is too big on a single measurement, but averages out over many (10,000 measurements → 100× improvement)
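
The 10,000 measurements → 100× claim is the usual 1/√N law for averaging independent jitter. A quick numerical check with synthetic noise (the per-measurement σ of 5 cm and the true depth are arbitrary values chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
true_depth = 2.0   # metres (arbitrary)
sigma = 0.05       # assumed per-measurement jitter, metres
n = 10_000

samples = true_depth + sigma * rng.normal(size=n)
single_err = sigma                  # expected error of one reading
avg_err = sigma / np.sqrt(n)        # standard error of the mean
improvement = single_err / avg_err  # sqrt(10_000) = 100
```

The improvement is exactly √N only for independent, zero-mean noise; systematic offsets (e.g. temperature drift) do not average out.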

37. 3D modeling
Aligning range images: pairwise, globally
(some slides from S. Rusinkiewicz, J. Ponce, …)

38. Aligning 3D Data
If correct correspondences are known (from feature matches, colors, …), it is possible to find the correct relative rotation/translation

39. Aligning 3D Data
Xi’ = T Xi (e.g. Kinect motion)
For T a general 4×4 matrix: linear solution from ≥5 correspondences
For T a Euclidean transform: 3 correspondences (using quaternions) [Horn87] “Closed-form solution of absolute orientation using unit quaternions”
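
With known correspondences, the Euclidean case on this slide can also be solved in closed form via SVD (the Kabsch method, an alternative to the quaternion solution of [Horn87]). A minimal sketch:

```python
import numpy as np

def best_rigid_transform(X, Xp):
    """Least-squares R, t such that Xp ~ R @ X + t, for 3xN corresponding
    point sets (SVD / Kabsch; alternative to the quaternion method)."""
    cX = X.mean(axis=1, keepdims=True)
    cXp = Xp.mean(axis=1, keepdims=True)
    H = (X - cX) @ (Xp - cXp).T                         # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cXp - R @ cX
    return R, t
```

Three non-collinear correspondences suffice, exactly as the slide states for the quaternion formulation; with noisy data, using more correspondences gives the least-squares fit.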

40. Aligning 3D Data
How to find corresponding points?
Previous systems were based on user input, feature matching, surface signatures, etc.

41. Spin Images
[Johnson and Hebert ’97]
A “signature” that captures local shape
Similar shapes → similar spin images

42. Computing Spin Images
Start with a point on a 3D model
Find the (averaged) surface normal at that point
Define a coordinate system centered at this point, oriented according to the surface normal and two (arbitrary) tangents
Express the other points (within some distance) in terms of the new coordinates

43. Computing Spin Images
Compute a histogram of the locations of the other points in the new coordinate system, ignoring rotation around the normal; the two axes are “radial dist.” and “elevation”

44. Computing Spin Images
(Figure: example spin images over “radial dist.” and “elevation”)
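
The construction on the last two slides condenses to a few lines: express neighbors as (radial distance, elevation) relative to the oriented point and histogram them. A minimal sketch, with hypothetical bin_size and image_width parameters:

```python
import numpy as np

def spin_image(p, n, points, bin_size=0.05, image_width=10):
    """Histogram the neighbours of p in (radial distance, elevation)
    coordinates relative to the unit normal n, ignoring rotation about n
    (which is what makes the signature pose-invariant). Rough sketch of
    [Johnson and Hebert '97]; parameters are illustrative."""
    d = points - p
    beta = d @ n                                          # elevation along the normal
    alpha = np.sqrt(np.maximum((d * d).sum(axis=1) - beta ** 2, 0.0))
    img = np.zeros((image_width, image_width))
    i = np.floor(beta / bin_size + image_width / 2).astype(int)
    j = np.floor(alpha / bin_size).astype(int)
    keep = (i >= 0) & (i < image_width) & (j >= 0) & (j < image_width)
    np.add.at(img, (i[keep], j[keep]), 1.0)               # accumulate the histogram
    return img
```

Two neighbors at the same radius and height around the normal fall into the same bin no matter where they sit around it, which is the rotation invariance the slide describes.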

45. Spin Image Parameters
Size of neighborhood: determines whether local or global shape is captured
Big neighborhood: more discriminative power
Small neighborhood: resilience to clutter
Size of bins in the histogram:
Big bins: less sensitive to noise
Small bins: capture more detail

46. Alignment with Spin Images
Compute a spin image for each point / a subset of points in both sets
Find similar spin images => potential correspondences
Compute the alignment from correspondences
Same problems as with image matching: robustness of descriptor vs. discriminative power; mismatches => robust estimation required

47. Aligning 3D Data
Alternative: assume closest points correspond to each other, compute the best transform…

48. Aligning 3D Data
… and iterate to find the alignment: Iterated Closest Points (ICP) [Besl & McKay 92]
Converges if the starting position is “close enough”
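
The closest-point-then-transform loop fits in a few lines. This sketch in the spirit of [Besl & McKay 92] uses brute-force point-to-point matching and an SVD solve for the rigid motion; it is meant for small demo clouds, not real scans:

```python
import numpy as np

def _fit_rigid(X, Y):
    """Least-squares R, t with Y ~ R @ X + t (SVD / Kabsch)."""
    cX, cY = X.mean(1, keepdims=True), Y.mean(1, keepdims=True)
    U, _, Vt = np.linalg.svd((X - cX) @ (Y - cY).T)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cY - R @ cX

def icp(src, dst, iters=20):
    """Point-to-point ICP: assume nearest neighbours correspond, fit the
    best rigid motion, repeat. src, dst are 3xN / 3xM arrays; matching
    here is brute force, O(N*M) per iteration."""
    R, t = np.eye(3), np.zeros((3, 1))
    for _ in range(iters):
        moved = R @ src + t
        # squared distance from every moved src point to every dst point
        d2 = ((moved[:, :, None] - dst[:, None, :]) ** 2).sum(axis=0)
        matched = dst[:, d2.argmin(axis=1)]
        R, t = _fit_rigid(src, matched)
    return R, t
```

The "close enough" caveat on the slide is visible here: if the initial nearest neighbours are mostly wrong, the fitted transform can lock into a wrong local minimum.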

49. ICP Variant – Point-to-Plane Error Metric
Using a point-to-plane distance instead of point-to-point lets flat regions slide along each other more easily [Chen & Medioni 92]

50. Finding Corresponding Points
Finding the closest point is the most expensive stage of ICP
Brute-force search: O(n)
Spatial data structure (e.g., k-d tree): O(log n)
Voxel grid: O(1), but large constant, slow preprocessing

51. Finding Corresponding Points
For range images, simply project the point [Blais/Levine 95]
Constant-time, fast
Does not require precomputing a spatial data structure
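
Projective data association can be sketched as follows: project the query point through the range image's intrinsics and read back the surface point stored at that pixel, instead of searching for a nearest neighbour. A minimal sketch assuming pinhole intrinsics f, cx, cy (names and parameters are illustrative):

```python
import numpy as np

def project_lookup(p, depth_map, f, cx, cy):
    """Projective association in the spirit of [Blais/Levine 95]: project
    the 3D point p into the range image and return the 3D surface point
    stored at that pixel. Constant time per query."""
    u = int(round(f * p[0] / p[2] + cx))
    v = int(round(f * p[1] / p[2] + cy))
    h, w = depth_map.shape
    if not (0 <= v < h and 0 <= u < w):
        return None                      # point projects outside the range image
    z = depth_map[v, u]
    # back-project the pixel and its stored depth to a 3D point
    return np.array([(u - cx) * z / f, (v - cy) * z / f, z])
```

The returned point is only an approximate correspondence (it is the surface point along the pixel's ray, not the true closest point), but in practice ICP still converges and each lookup is O(1).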

52. Next week: Guest Lecture