Making Things See (Book Review)

“Making Things See: 3D Vision with Kinect, Processing, Arduino, and MakerBot” by Greg Borenstein is about the Kinect. The book covers a lot of what has been done with the Kinect’s depth sensor, and shows you how to do it yourself. It’s a very complete book that covers topics such as working with the depth map, the point cloud, and the skeleton data, but also things such as generating models for 3D printing.

I must confess that reading it made me want to grab the Kinect and start coding (or running the examples, at least)! The book uses Processing for the examples, which are simple and well explained. It even explains a bit about the main programming structures, so you don’t have to be a professional programmer to run and modify the examples.

Contents

The book starts with an overview of the Kinect’s history and how it works, interviews with well-known artists who have used the Kinect in their projects, and instructions for setting up the Processing development environment to program with the Kinect.

It then goes on to practical material, starting with the basics of getting the color image and the depth image from the Kinect and displaying them in your Processing program window. You’ll learn how to get the distance of each pixel from the Kinect, in millimetres, and how to filter the depth data based on that distance. With this basic functionality, the book shows how to find the object nearest to the Kinect and use that to create a “write in the air” program, and to control the position of objects on the computer screen with your hands.
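To give a flavor of how compact these examples are, here’s a minimal sketch in the spirit of the book’s early chapters. It assumes the SimpleOpenNI library the book uses (method names vary a bit between library versions), and the nearest-point threshold is my own choice:

```java
import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();  // ask the Kinect for depth data
}

void draw() {
  kinect.update();
  image(kinect.depthImage(), 0, 0);  // grayscale depth image

  // depthMap() gives each pixel's distance from the Kinect in millimetres
  int[] depthValues = kinect.depthMap();
  int closestValue = 8000;  // start beyond the sensor's range
  int closestX = 0, closestY = 0;

  for (int y = 0; y < 480; y++) {
    for (int x = 0; x < 640; x++) {
      int d = depthValues[y * 640 + x];
      if (d > 0 && d < closestValue) {  // 0 means "no reading"
        closestValue = d;
        closestX = x;
        closestY = y;
      }
    }
  }

  // mark the nearest point -- the basis for the "write in the air" project
  fill(255, 0, 0);
  ellipse(closestX, closestY, 20, 20);
}
```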

The next chapter is about getting and working with the point cloud: displaying it in 3D coordinates in Processing, allowing the user to navigate through it, and making it interactive by creating virtual hot spots that activate program functions. In this chapter you’ll see an Air Drum Kit where several hot spots are assigned to different sounds, and also a project where you import a 3D model of the Kinect itself into your program and position it in the 3D scene where the real Kinect would be.
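The core of the point-cloud code looks roughly like this (again a sketch, assuming SimpleOpenNI’s depthMapRealWorld() call, which returns the points in real-world millimetre coordinates):

```java
import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(1024, 768, P3D);  // 3D renderer for the point cloud
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}

void draw() {
  background(0);
  kinect.update();

  // move the origin into the scene and flip it so the cloud appears upright
  translate(width/2, height/2, -1000);
  rotateX(PI);

  stroke(255);
  // each PVector is a point in real-world millimetre coordinates
  PVector[] points = kinect.depthMapRealWorld();
  for (int i = 0; i < points.length; i += 10) {  // skip points to keep it fast
    PVector p = points[i];
    point(p.x, p.y, p.z);
  }
}
```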

In the next chapter, things get a bit more complicated. It deals with the skeleton data, which requires that the SimpleOpenNI library perform some calibration tasks (where users are required to make the Psi pose) to detect the user and the positions of their limbs. The chapter shows, step by step, how to perform the calibration and how to get the positions of the various skeleton joints. It also shows how to get information about the position of the user’s hands without the calibration step. One of the chapter’s projects shows how to detect whether the user is in a predetermined pose.
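The shape of the skeleton-tracking code is something like the sketch below. The calibration callbacks changed between SimpleOpenNI versions, so treat this as an approximation of the 0.x API the book was written against:

```java
import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);  // ask for skeleton data
}

void draw() {
  kinect.update();
  image(kinect.depthImage(), 0, 0);

  IntVector userList = new IntVector();
  kinect.getUsers(userList);
  if (userList.size() > 0) {
    int userId = (int) userList.get(0);
    if (kinect.isTrackingSkeleton(userId)) {
      // get the right-hand joint in real-world coordinates...
      PVector hand = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, hand);
      // ...and convert it to screen (projective) coordinates for drawing
      PVector screenHand = new PVector();
      kinect.convertRealWorldToProjective(hand, screenHand);
      fill(255, 0, 0);
      ellipse(screenHand.x, screenHand.y, 20, 20);
    }
  }
}

// calibration callbacks: wait for the Psi pose, then start tracking
void onNewUser(int userId) {
  kinect.startPoseDetection("Psi", userId);
}

void onStartPose(String pose, int userId) {
  kinect.stopPoseDetection(userId);
  kinect.requestCalibrationSkeleton(userId, true);
}

void onEndCalibration(int userId, boolean successful) {
  if (successful) kinect.startTrackingSkeleton(userId);
  else kinect.startPoseDetection("Psi", userId);
}
```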

The next two chapters were a bit unexpected to me, and a nice surprise. In chapter 5, you’ll see how to transform the point cloud into a 3D model that can be exported and printed on a 3D printer. The chapter introduces various libraries and tools to cover the complete process: using a Processing library – ModelBuilder – to create a 3D model and export it to STL; opening the STL in MeshLab – an open source program for working with 3D models – and cleaning it up; and then using ReplicatorG to send the model to a 3D printer like the MakerBot.
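The gist of the export step, as far as I can reconstruct it, is to triangulate the depth grid and hand the faces to ModelBuilder. The UGeometry/UVec3 class names are from the library version of the book’s era, and the grid step and mesh details here are my own simplification, so double-check against the book’s code:

```java
import SimpleOpenNI.*;
import unlekker.modelbuilder.*;  // Marius Watz's ModelBuilder library

SimpleOpenNI kinect;

void setup() {
  size(1024, 768, P3D);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}

void draw() {
  kinect.update();
  // ... display the point cloud as in the earlier sketch ...
}

void keyPressed() {
  // triangulate the depth grid: two triangles per square of four neighbors
  PVector[] points = kinect.depthMapRealWorld();
  UGeometry model = new UGeometry();
  model.beginShape(TRIANGLES);
  int w = 640, h = 480, step = 4;  // skip points to keep the mesh small
  for (int y = 0; y < h - step; y += step) {
    for (int x = 0; x < w - step; x += step) {
      PVector a = points[y * w + x];
      PVector b = points[y * w + x + step];
      PVector c = points[(y + step) * w + x];
      PVector d = points[(y + step) * w + x + step];
      model.addFace(new UVec3(a.x, a.y, a.z), new UVec3(b.x, b.y, b.z),
                    new UVec3(c.x, c.y, c.z));
      model.addFace(new UVec3(b.x, b.y, b.z), new UVec3(d.x, d.y, d.z),
                    new UVec3(c.x, c.y, c.z));
    }
  }
  model.endShape();
  model.writeSTL(this, "scan.stl");  // ready to clean up in MeshLab
}
```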

Chapter 6 is about robotics: using Arduino-controlled motors to mimic the movements of the user’s arm, and also making the robotic arm follow a point. Finally, chapter 7 talks about other programming tools where the Kinect can be used.
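On the Processing side, the robotics chapter boils down to computing a joint angle from the skeleton and pushing it over serial to the Arduino. Something along these lines – the angle math and one-byte serial protocol are my own simplification, not the book’s exact code:

```java
import SimpleOpenNI.*;
import processing.serial.*;

SimpleOpenNI kinect;
Serial port;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
  port = new Serial(this, Serial.list()[0], 9600);  // adjust the port index
}

void draw() {
  kinect.update();
  image(kinect.depthImage(), 0, 0);

  IntVector userList = new IntVector();
  kinect.getUsers(userList);
  if (userList.size() > 0) {
    int userId = (int) userList.get(0);
    if (kinect.isTrackingSkeleton(userId)) {
      PVector shoulder = new PVector();
      PVector elbow = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, shoulder);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, elbow);

      // angle of the upper arm in the x-y plane, mapped to a 0-180 servo range
      float angle = degrees(atan2(elbow.y - shoulder.y, elbow.x - shoulder.x));
      int servoAngle = (int) constrain(map(angle, -90, 90, 0, 180), 0, 180);
      port.write(servoAngle);  // the Arduino sketch reads one byte per position
    }
  }
}

// (plus the same Psi-pose calibration callbacks as in the skeleton sketch above)
```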

Disclaimer: I got my electronic copy of the book from O’Reilly for free as an instructor’s inspection copy.
