Saturday, April 3, 2010

Activity 5. Shape from Texture Using Spectral Moments

I. Introduction
A method for computing the shape of curved surfaces from texture information is presented in the paper of Super and Bovik. In their work, the authors describe an accurate way of computing the local spatial-frequency moments of a textured image using Gabor functions and their derivatives. The key idea is to recover the 3D shape of the surface through the local tilt and slant estimated from the texture.

II. Methods
In this activity we were tasked to capture a textured 3D object and reconstruct its shape using Gabor functions, following the method of Super and Bovik. If an object with its own repeating pattern cannot be found, any object may be wrapped in a readily available repeating pattern such as a net.

The figures below show the objects I used. With the repeating patterns on their surfaces, the objects were reconstructed as described above.

Figure 1. The 3D objects for the reconstruction.
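
Since the full moment-based shape recovery of Super and Bovik is quite involved, a minimal sketch of the first step may help illustrate the idea: convolve the image with a small bank of complex Gabor filters and keep, at each pixel, the frequency and orientation giving the strongest response as an estimate of the dominant local spatial frequency. The file name and the filter parameters below are assumptions for illustration only, not the actual values used in this activity.

% Minimal Matlab sketch (not the full Super and Bovik method): estimate the
% dominant local spatial frequency of a texture with a small Gabor filter bank.
img = double(rgb2gray(imread('textured_object.jpg')));  % hypothetical file name
img = img - mean(img(:));                               % remove the DC term

sigma  = 8;                        % Gaussian envelope width (pixels), assumed
freqs  = [0.05 0.1 0.2];           % candidate spatial frequencies (cycles/pixel)
thetas = 0:pi/4:3*pi/4;            % candidate orientations (radians)
[x, y] = meshgrid(-3*sigma:3*sigma);

best = zeros(size(img));           % strongest response found so far
u = zeros(size(img)); v = zeros(size(img));   % dominant frequency components
for f = freqs
    for t = thetas
        carrier = exp(2i*pi*f*(x*cos(t) + y*sin(t)));
        g = exp(-(x.^2 + y.^2)/(2*sigma^2)) .* carrier;  % complex Gabor kernel
        resp = abs(conv2(img, g, 'same'));               % local response magnitude
        mask = resp > best;
        best(mask) = resp(mask);
        u(mask) = f*cos(t);  v(mask) = f*sin(t);         % keep the winning (u,v)
    end
end
% (u,v) approximates the dominant local frequency per pixel; its variation
% across the surface is what the tilt and slant recovery works from.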


III. Results
This activity was very hard for me. The researchers spent quite some time developing this method, and as an amateur in Matlab and image processing I know I certainly could not have done it by myself. That's why I'd like to thank my friend Loren Tusara for helping me a lot in this activity. Thank you.

So here are the results I got using the pictures above, arranged to correspond to the images in Figure 1.


I give myself a grade of 10. :) Thanks to Loren. Your thesis is so hard, friend.

Activity 7. Stereometry

I. Introduction

There are many ways to render the structure and measure the volume of a 3D object. One of the most commonly used methods is stereometry.


Stereometry uses two identical cameras whose lens centers are separated by a distance b (see Figure 1). The idea is to combine multiple 2D views to get information on the depth of several points on an object, thereby rendering the 3D object. The plane connecting the lens centers is located at a distance z from the object point, and this distance is what we want to recover. From the points of view of cameras one and two, the object point appears at transverse distances x1 and x2 with respect to the centers of cameras one and two, respectively. Here, f denotes the distance of the image plane from the camera lens. This principle is similar to the one our eyes use to perceive the depth of objects.

Figure 1. Geometry of the setup used.


In our activity, the setup is simplified further. Since we use only one camera, we simply displace the camera by the finite transverse distance b at which the two lens centers would otherwise be placed. From the geometry in Figure 1, similar triangles give x1 - x2 = bf/z, so the depth follows as z = bf/(x1 - x2).
Applying the camera calibration that we studied in the previous activity, we can then reconstruct the 3D structure of the object we used.
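
To make the computation concrete, here is a minimal sketch of how the recovered depth formula can be applied to a few matched points from the two views. The values of b, f, and all coordinates below are placeholders for illustration, not the actual measurements from this activity.

% Minimal Matlab sketch, assuming b, f, and matched image coordinates are known.
b  = 50;                  % transverse camera displacement (mm), assumed
f  = 4;                   % effective focal distance (mm), assumed from calibration
x1 = [10.2 12.5 15.1];    % image x-coordinates of matched points, view 1 (mm)
x2 = [ 8.7 10.9 13.2];    % same points in view 2 (mm)
y  = [ 5.0  7.3  9.8];    % common y-coordinates of the points (mm)

z = b*f ./ (x1 - x2);     % depth of each point from z = bf/(x1 - x2)
X = x1 .* z / f;          % back-project to world coordinates
Y = y  .* z / f;

plot3(X, Y, z, 'o');      % scatter plot of the recovered 3D points
grid on; xlabel('X'); ylabel('Y'); zlabel('z');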


II. Data Gathering


The raw images used are shown below. These were taken by Kaye's group in their Applied Physics class.


III. 3D Reconstruction

The following is my reconstruction using Matlab. Coming from 2D images, it is still not that good. I am having problems with the calibration of the camera used here, since I have to calibrate it again before the 3D reconstruction.

I give myself a grade of 7 here since the result is still not good; there was an error in the calibration.