ioctl.eu

max grosse

2010-08-16 13:34
Hartley and Zisserman describe homography calculation (or estimation) very well in their wonderful book "Multiple View Geometry in Computer Vision". However, they suggest solving the equation system using a singular value decomposition (SVD). Another possible solution is to use just a QR decomposition, see e.g. Wikipedia for details. QR decomposition is numerically stable and likely even faster than SVD. Therefore, and to learn a bit of raw LAPACK, I've implemented a homography calculation from four point correspondences using QR factorization. With just four points, my implementation appears to be more than twice as fast as cvFindHomography (which uses SVD, if I recall correctly), though this might be a bit of an unfair comparison, because cvFindHomography is built for more than four points, e.g. by using RANSAC, which may already add some overhead.

Nevertheless, the code is here. I've used single precision, but changing that to double precision should be trivial. The error is extremely small, comparable to OpenCV's, although this depends on the LAPACK implementation chosen.
find_homography_qr.cpp
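To illustrate the idea without pulling in LAPACK, here is a minimal, self-contained sketch of the same approach: for each correspondence (x,y) → (u,v) we get two rows of an 8x8 system (fixing h9 = 1), which is then solved with a hand-rolled Householder QR followed by back-substitution. This is not the code from find_homography_qr.cpp (that one calls LAPACK in single precision); the function and type names here are mine, and a real implementation should use a proper LAPACK routine instead.

```cpp
#include <array>
#include <cmath>

// augmented 8x9 system [A | b]
using Mat8 = std::array<std::array<double, 9>, 8>;

// Householder QR on the augmented matrix, then back-substitution.
static void householder_qr_solve(Mat8 &M, std::array<double, 8> &h) {
    const int n = 8;
    for (int k = 0; k < n; ++k) {
        // Householder vector for column k
        double norm = 0;
        for (int i = k; i < n; ++i) norm += M[i][k] * M[i][k];
        norm = std::sqrt(norm);
        const double alpha = (M[k][k] > 0) ? -norm : norm;
        std::array<double, 8> v{};
        for (int i = k; i < n; ++i) v[i] = M[i][k];
        v[k] -= alpha;
        double vnorm2 = 0;
        for (int i = k; i < n; ++i) vnorm2 += v[i] * v[i];
        if (vnorm2 < 1e-15) continue;
        // apply the reflector I - 2 v v^T / (v^T v) to all remaining columns
        for (int j = k; j <= n; ++j) {
            double dot = 0;
            for (int i = k; i < n; ++i) dot += v[i] * M[i][j];
            const double f = 2.0 * dot / vnorm2;
            for (int i = k; i < n; ++i) M[i][j] -= f * v[i];
        }
    }
    // back-substitution on the upper-triangular R
    for (int i = n - 1; i >= 0; --i) {
        double s = M[i][n];
        for (int j = i + 1; j < n; ++j) s -= M[i][j] * h[j];
        h[i] = s / M[i][i];
    }
}

// Estimate H (row-major 3x3, H[8] fixed to 1) from exactly four
// correspondences src[p] -> dst[p].
void find_homography(const double src[4][2], const double dst[4][2],
                     double H[9]) {
    Mat8 M{};
    for (int p = 0; p < 4; ++p) {
        const double x = src[p][0], y = src[p][1];
        const double u = dst[p][0], v = dst[p][1];
        M[2 * p]     = {x, y, 1, 0, 0, 0, -u * x, -u * y, u};
        M[2 * p + 1] = {0, 0, 0, x, y, 1, -v * x, -v * y, v};
    }
    std::array<double, 8> h{};
    householder_qr_solve(M, h);
    for (int i = 0; i < 8; ++i) H[i] = h[i];
    H[8] = 1.0;
}
```

Fixing h9 = 1 fails for the (rare) homographies where that entry is actually zero; the homogeneous 9-parameter formulation via SVD, as in Hartley/Zisserman, does not have this problem.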
2010-06-06 17:18
For an assignment, I had to create a short stop-motion movie. Before starting to search for good software to aid the process of taking something like 600 photos, I decided to write a simple application myself. Actually, I wanted to use my own cheap webcam for this, and as the default drivers are a mess, I preferred to use my own UVC driver. This is also a good stress test for the driver, and I am happy to say that it survived streaming, mode switching and taking 600 photos without any complaints.

The basic mode of operation is to display a live feed of the current camera image in very low resolution, thus at good frame rates. Whenever the space bar is pressed, the camera is switched to 1280x1024 resolution, takes a picture, stores it with an incrementing number in the filename and switches back to the low resolution. I've added some neat helpers, like the ability to preview the entire movie without needing to exit the application. Also, I've added (ok, "hacked" would be a more appropriate term) a simple difference-image overlay to enable better judgment on how the current live-feed frame differs from the previously captured frame.
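The difference-image overlay boils down to blending the live frame with the per-pixel absolute difference to the last captured frame. A minimal sketch, assuming 8-bit interleaved pixel buffers of equal size; `diff_overlay` and `alpha` are my names for illustration, not the ones used in stopmotion.cpp:

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Blend the live frame with the absolute per-pixel difference to the
// previously captured frame: unchanged regions darken, changed regions
// stand out. alpha controls how strongly the difference dominates.
std::vector<uint8_t> diff_overlay(const std::vector<uint8_t> &live,
                                  const std::vector<uint8_t> &last,
                                  float alpha = 0.5f) {
    std::vector<uint8_t> out(live.size());
    for (size_t i = 0; i < live.size(); ++i) {
        const int d = std::abs(int(live[i]) - int(last[i]));
        out[i] = uint8_t((1.0f - alpha) * live[i] + alpha * float(d));
    }
    return out;
}
```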

For those interested, I will upload the final movie to some video platform someday. For now, I'll just give you the code of the stop-motion capturing application. It's so basic, it only has 171 lines.

Get it here:
stopmotion.cpp
2010-03-06 21:17
As mentioned in the previous post, I have been working on a userspace driver for UVC cameras under MacOS X.

As the basic functionality for my camera is there, I have decided to release the code to the public. Feel invited to join development now!

I will try to keep the wiki page updated; instructions on where to find the code are there as well.

If you would like to collaborate on this, please say so in the comments to this post; I will then set up an SVN account for you. Depending on how this goes, I could possibly set up a dedicated Trac instance as well, but for now I guess this is not necessary.

Also, I would be happy if you could report any experiences - but do not expect me to be able to fix any bugs very soon.

As I restructure everything on this website, the wiki is gone. Feel free to check out the code directly from SVN, if interested. I will create a dedicated page again some time later.
svn co http://svn.ioctl.eu/mac_uvc/trunk/mac_uvc
2010-02-02 14:10
I now have this nice and cheap webcam, so I can work and try things out even when I am at home. The package said something like "UVC". That's USB Video Class, a specific protocol atop USB which many video devices share. It also had a "MacOS X" sticker on it, because UVC is natively supported by MacOS X, which I thought was pretty great. So, I plug in the camera and nothing happens. That's just fine. So I launch "Photo Booth", that silly and totally useless application that ships with MacOS X, and indeed, the camera works. But it works quite slowly. My trained eyes sense something around five frames per second. I thought this might be because the resolution is too high and the exposure too long, so that the camera is not able to provide images that fast. Now the problem was: how can you adjust those camera parameters on a Mac? I'm used to the Windows style: everywhere you use a camera, you can open up that familiar DirectShow configuration panel and adjust whatever you like (or, at least, whatever the camera lets you adjust). And even if this fails, there is that neat AMCap that comes with some SDK. Usually you can use that one to adjust everything according to your needs. But it turns out that there is nothing you can do on your Mac.

So it looks like one has to mess with UVC directly to take charge of the camera. So I started understanding all this USB and UVC stuff, and I have to admit that USB is quite a horrible invention. On my journey I found some interesting work already done by Dominic (link). He provides a small Obj-C class with which you can switch auto-exposure, exposure time and some other important parameters of the camera. Nevertheless, QuickTime still implements the main UVC parts and provides you with images at a rate which QuickTime decides to be good.
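For the curious: adjusting such parameters comes down to class-specific USB control requests as defined in the UVC specification. The sketch below just assembles the fields of a SET_CUR request for the camera terminal's exposure controls; the struct and function names are mine for illustration (the actual transfer would go through IOKit on MacOS X), but the constants are from the UVC 1.1 spec.

```cpp
#include <cstdint>

// Fields of a class-specific USB control request (UVC 1.1, section 4.1).
struct UVCControlRequest {
    uint8_t  bmRequestType;  // 0x21: class request, host-to-device, interface
    uint8_t  bRequest;       // e.g. SET_CUR
    uint16_t wValue;         // control selector in the high byte
    uint16_t wIndex;         // entity (unit/terminal) id high, interface low
    uint16_t wLength;        // payload size in bytes
};

constexpr uint8_t UVC_SET_CUR = 0x01;
// camera terminal control selectors
constexpr uint8_t CT_AE_MODE_CONTROL = 0x02;
constexpr uint8_t CT_EXPOSURE_TIME_ABSOLUTE_CONTROL = 0x04;

// Build a SET_CUR request targeting a control of a given entity
// (camera terminal or processing unit) on the video control interface.
UVCControlRequest make_set_cur(uint8_t selector, uint8_t entity_id,
                               uint8_t interface_num, uint16_t payload_len) {
    return { 0x21, UVC_SET_CUR,
             uint16_t(selector << 8),
             uint16_t((entity_id << 8) | interface_num),
             payload_len };
}
```

For CT_EXPOSURE_TIME_ABSOLUTE_CONTROL the payload is a 4-byte dwExposureTimeAbsolute in 0.1 ms units; auto-exposure usually has to be switched to manual via CT_AE_MODE_CONTROL first, or the camera will reject the request.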

That's why I have started writing my own userland UVC driver. To do everything myself and get full control over the camera. To be able to configure the stupidest details. To get frames really fast. That was the idea. I've now been coding for quite a while in my little and precious spare time to get this working, and I am still quite far from a state I would consider "stable enough to release". But I'll get this done, so I'll set up a dedicated page in the wiki to document my progress (and to pressure myself not to waste my spare time on cinema or eating out, but on coding USB!).

While restructuring my website, the original dedicated page for this project is gone.
2009-11-28 20:35
We recently encountered the need to copy OpenGL textures. Do not ask; there are situations in which copying a texture is the best solution.

My first attempt was to use pixel buffer objects: create a sufficiently large PBO, copy the texture into it using glGetTexImage, bind a new texture and copy the data there using glTexImage2D. This was horribly slow, around 22 ms for a 1920x1080 texture on a GeForce 8800 Ultra. I still wonder why.
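For reference, that slow round-trip looks roughly like this (a sketch, assuming "pbo" has already been created with glGenBuffers and sized via glBufferData, and that format, type and internal_format match the texture):

```cpp
/// pack: read the source texture into the PBO
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBindTexture(GL_TEXTURE_2D, tex_src);
glGetTexImage(GL_TEXTURE_2D, 0, format, type, 0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

/// unpack: feed the PBO contents into the destination texture
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBindTexture(GL_TEXTURE_2D, tex_dst);
glTexImage2D(GL_TEXTURE_2D, 0, internal_format, width, height, 0,
             format, type, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
```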

The next try was to use framebuffer objects, which is pretty straightforward:

/* Assume "fbo" is the name of an FBO created using glGenFramebuffersEXT(1, &fbo),
 * and width/height are the dimensions of the texture, respectively.
 * "tex_src" is the name of the source texture, and
 * "tex_dst" is the name of the destination texture, which should have been
 * already created */

/// bind the FBO
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
/// attach the source texture to the FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex_src, 0);
/// bind the destination texture
glBindTexture(GL_TEXTURE_2D, tex_dst);
/// copy from the framebuffer (here, the FBO!) to the bound texture
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
/// unbind the FBO
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

This works very well and very fast, in less than 0.1 ms. However, you need to create a new FBO for every distinct set of texture dimensions.

Knowing this, I tried the same approach to copy a texture into a pixel buffer object, which also turns out to be considerably faster this way:

/// again, bind the FBO and attach the source texture
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
        GL_TEXTURE_2D, tex_src, 0);

/// read pixels into the PBO from the framebuffer
/// note you obviously need to specify a format and type
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, width, height, format, type, 0);

/// unbind everything as we are done
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

There's of course no guarantee that these approaches are the fastest on every hardware and setup. If something else works faster for you, I would be happy to hear about it!
2009-09-16 20:11
I continued working on my automatic panorama creation software. For now, I tried a completely different approach than bundle adjustment, which already provides some interesting results (see image). Still a lot to do, though!
