Wednesday, August 15, 2012

5th OpenCV PSMove Example (Multi-Controller Tracking, World Distance Estimation, Build-Toolchain)


This is the fifth blog post about the implementation of a tracker for the colored sphere of the PSMove controller. This time it is about multi-controller tracking, real-world distance estimation, performance issues, and the build toolchain.

1. Multi Controller Tracking

A lot has happened in the meantime. The most exciting additions to the tracker are its ability to track multiple controllers, its increased robustness and the calculation of the distance to the camera.

Have a look at a short video featuring tracking of two controllers.



There are some things that are noticeable in that example. First of all, and of course the obvious one: two controllers are tracked at the same time at a reasonable speed of 350-900 FPS, depending on their distance to the camera. The farther away a controller is from the camera, the higher the FPS and vice versa. This is caused by the different levels of regions of interest I use to search for the colored blob in the image, which I already described in one of the previous posts [3rd OpenCV Example].

2. Real world distance estimation

The example now also displays an estimate of the distance between the controller and the camera in [mm]. This might come in very handy for applications that want to make use of the 3D position of the controller. You might ask why to use the distance in [mm] rather than the sphere's radius. Using the real-world unit has two advantages. One is that [mm] is easier to understand, as it relates directly to the real world in which the interaction takes place. The second is that the distance in [mm] has a linear 1:1 relation to the user's movement on the Z-axis, while the radius has a non-linear relation. To put it simply ... it makes app development easier :P.

3. Increased worst case performance 


One big problem of the old examples was that if the controller was not visible in the camera image at all, the framerate dropped drastically (to around 60 FPS on a 2.5 GHz dual core), which was a no-go on slower systems, e.g. laptops running in power-saving mode. To increase the FPS when the controller is not tracked, the tracker now scans only one quarter of the image, and if it does not find the controller there, it immediately returns NOT_FOUND. In the next iteration a different quarter of the image is evaluated, and so on. This is best recognizable in the video at second 00:17, when I hide the controllers behind my back.
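For illustration, here is a minimal sketch of that quarter-scan idea (the helper name and the NOT_FOUND constant are mine, not the actual psmoveapi code):

/* Sketch: a rotating index selects a different quarter of the frame on each call
   while the controller is lost. */
#include <opencv/cv.h>

static int quarter = 0;   /* rotates 0..3 over successive calls */

CvRect next_search_quarter(int img_w, int img_h)
{
    int w = img_w / 2, h = img_h / 2;
    CvRect roi = cvRect((quarter % 2) * w, (quarter / 2) * h, w, h);
    quarter = (quarter + 1) % 4;
    return roi;
}

/* Usage: restrict the blob search with cvSetImageROI(frame, next_search_quarter(640, 480));
   if nothing is found there, return NOT_FOUND right away instead of scanning the full frame. */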

4. Increased robustness to occlusions

Another nice feature is the increased robustness to occlusions. Have a look at the screenshots taken from the video above.

In all of the screenshots you can see that the circle of the controller is estimated very well, although it is partly occluded. The estimation of the center of the sphere is quite simple and can therefore only correct occlusions smaller than one half of the sphere's size. The estimation builds on the idea that the two most distant points of a detected contour span the diameter of the sphere, and that the midpoint between these two points is the center of the very same. This assumption holds if the occlusion occurs from one side or from two opposite sides. In other situations it may fail, but the approach turned out to be convenient and fast.
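A rough sketch of that estimation, assuming the contour is available as an OpenCV CvSeq of points (my reconstruction, not the exact tracker code):

#include <math.h>
#include <opencv/cv.h>

/* Estimate center and radius of the sphere from the two most distant contour points. */
void estimate_sphere(CvSeq *contour, CvPoint2D32f *center, float *radius)
{
    double best = 0;
    CvPoint a_best = cvPoint(0, 0), b_best = cvPoint(0, 0);
    int i, j;
    for (i = 0; i < contour->total; i++) {
        CvPoint a = *(CvPoint *)cvGetSeqElem(contour, i);
        for (j = i + 1; j < contour->total; j++) {
            CvPoint b = *(CvPoint *)cvGetSeqElem(contour, j);
            double dx = a.x - b.x, dy = a.y - b.y;
            double d2 = dx * dx + dy * dy;
            if (d2 > best) { best = d2; a_best = a; b_best = b; }
        }
    }
    /* the farthest pair approximates the diameter, its midpoint the sphere center */
    center->x = (a_best.x + b_best.x) / 2.0f;
    center->y = (a_best.y + b_best.y) / 2.0f;
    *radius   = (float)(sqrt(best) / 2.0);
}

The O(n^2) pair search is not a problem here, as the contours involved only have a few hundred points at most.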

5. The new build process and PSMoveAPI integration

Thanks to Thomas Perl, the optical tracking of the controller has now been integrated into the branch "tracker" of the official psmoveapi repository hosted on GitHub. It'll soon be moved to the master branch, but for now just stick with the "tracker" branch. It also contains a detailed description of how to build the whole system and the different examples (including the one from the video) that are part of the psmoveapi. As the build instructions for Windows are quite long, they are included at the end of this post :).

6. Remaining problems of the optical tracking

Unfortunately the optical tracking still has some problems.
  1. Only magenta, cyan and blue can be tracked robustly. Other colors suffer a lot from motion blur, which results in poor tracking performance, especially regarding the Z-position of the controller.
  2. Strong daylight coming through windows might be a big problem for tracking performance, too. Avoid direct sunlight by closing the curtains if the controllers cannot be tracked.
  3. There are still too many false detections (sphere estimated too small / wrong position) when the controller is partly occluded.
  4. Artificial light, especially that emitted by fluorescent lamps, causes unwanted jitter in the estimation of the 3D position.

enjoy!
   cheerio benjamin


System requirements:


---- Build Instructions For Windows -----------
Get and install
- MinGW       : http://sourceforge.net/projects/mingw/files/latest/download?source=files
- CMake       : http://www.cmake.org/cmake/resources/software.html
- OpenCV      : http://sourceforge.net/projects/opencvlibrary/files/opencv-win/
- GIT         : e.g. http://code.google.com/p/msysgit/
- PSEyeDriver : http://codelaboratories.com/get/cl-eye-driver/
[optional]
- CLEyeSDK    : http://codelaboratories.com/get/cl-eye-sdk/

1. build and configure OpenCV with cmake
    :: you may skip building OpenCV on your own and use the binary distribution instead,
    :: but I had no luck with that; it did not work on my system
    cd <where you extracted opencv>
    mkdir build
    cd build
    cmake .. -G "MinGW Makefiles"
    mingw32-make
    :: now go for a coffee break

2. Get your clone of the psmoveapi
    git clone https://github.com/thp/psmoveapi.git
   
3. Check out the "tracker" branch
    cd psmoveapi
    git checkout tracker

4. Init and update the submodules
    git submodule init
    git submodule update
   
5. Copy the Bluetooth headers and library to your MinGW installation
    :: e.g. MinGW installed at C:\MinGW\
    :: e.g. your cloned repository is at D:\dev\psmoveapi
   
    copy D:\dev\psmoveapi\external\mingw-w64-headers\*.h  C:\MinGW\include
    copy D:\dev\psmoveapi\external\mingw-w64-headers\*.a  C:\MinGW\lib

6. Make OpenCV known to your system and the CMake toolchain
    set OpenCV_DIR=<the path where you extracted opencv>
    set PATH=%PATH%;%OpenCV_DIR%\build\bin

7. prepare a new build with cmake for the psmoveapi
    ::
    mkdir build
    cd build
    :: only with OpenCV Camera access
    cmake .. -G "MinGW Makefiles"
    :: additionally with Code Laboratories PS Eye SDK
    cmake .. -G "MinGW Makefiles" -DPSMOVE_USE_CL_EYE_SDK=ON
   
8. finally build
    mingw32-make
   
9. start one of the desired test applications
------------------------------------------------

Saturday, July 28, 2012

Gyro calibration experiments with a turntable

Last weekend, I've dug out an old turntable to see how well the gyroscope of the Move can be calibrated with the USB-based calibration blob. The turntable has the advantage that it has a known rotation speed (two modes: 33 RPM and 45 RPM), so this can be used to see if the values we get back from one of the gyro axes somehow relate to real-world values.

Before I tried the turntable method, I just played around with the raw Gyro values to see what I can get out of them. I wrote a very simple QGraphicsView-based GUI to see the output visually, and this is what came out of that example:



As you see, that was not really anything to write home about, so next up was the turntable experiment. With that, I could scale the raw gyro readings so that "1.0" (in my case) corresponds to e.g. 45 RPM. Coupling that with an audio player using Qt MultimediaKit, one can translate the turntable movements into playback rate values and control the media player just as if it were a vinyl record:
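The scaling itself then boils down to a single division; a minimal sketch (the constant below is made up, use whatever raw value your own 45 RPM measurement yields):

/* Hypothetical calibration constant: the raw gyro reading measured at 45 RPM. */
#define GYRO_RAW_AT_45RPM  6500.0f

/* 1.0 == spinning at 45 RPM == normal playback speed of the audio player. */
float gyro_to_playback_rate(float raw_gyro)
{
    return raw_gyro / GYRO_RAW_AT_45RPM;
}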



In this week, I've been working on perfecting the calibration algorithm, cleaning up the API for the calibration part of the library and hooking everything up to Sebastian Madgwick's AHRS algorithm and visualizing the result with Qt3D.

Wednesday, July 18, 2012

hidapi on Linux: Now supporting hidraw enumeration

As I've been posting about previously, I've been working on a hidapi patch to get device enumeration working correctly for Bluetooth HID devices on Linux. After about two months, and thanks to the great support and feedback of Alan Ott (the hidapi maintainer), the patch landed in mainstream hidapi yesterday.

How does this benefit the MoveOnPC project? It now allows us to use the PS Move Motion Controller under Linux via Bluetooth and without having to resort to source-code-level hacks. For most users, this will just be a transparent improvement.

In other news, I've been working with Benjamin yesterday on getting his OpenCV code working on Linux, and while it worked, the LED writing did cause a noticeable pause every 4 seconds. Fixing this by using my experimental "multithreading" branch did help, but we had to increase the delay for the initial calibration blinking. I hope to look into possibilities to improve this for Bluetooth devices on Linux, so that we get the same write performance as on OS X and Windows.

Tuesday, June 26, 2012

4th OpenCV PSMove Example (HTML Debug, CL-SDK, INI-parser, Linux RC1)

This is the fourth blog post about the implementation of a tracker for the colored sphere of the PSMove controller. This time it is less about the tracker and more about debugging and some other useful stuff.

1. HTML Debug

Since it is quite hard to understand what happens during the calibration process without having a camera image to observe, an HTML trace of the calibration process is now created at runtime. Here are two examples of what the HTML trace looks like if the calibration fails or if it succeeds.

The first big 4x4 table shows the "blinks" of the color calibration process, already described in [1st Example Color Calibration]. The following row shows the result of the color estimation, i.e. the final mask that is used to estimate the color, the color the sphere was lit with and the estimated color. After that, a test is performed with the estimated color on the images in the first column of the 4x4 table, to see whether the color is a good match. Additionally, warnings and errors are posted under "Extended logging information" and finally a live camera image is shown (only if the calibration was a success).

2. CL-SDK Integration

On Windows the PS Eye SDK from "Code Laboratories" was integrated and is now used to acquire images (previously done via OpenCV) and to configure camera settings like exposure, auto-white-balance and so on. I'd have preferred to stay with OpenCV, however the CL-SDK neither allows accessing the camera with OpenCV and the CL-SDK simultaneously, nor are the camera settings applied to the camera permanently. That is, in order to use the CL-SDK to switch off the auto-exposure, I also have to use the CL-SDK to grab the frames from the camera.

For this reason a new "class" named "camera_control.h" was introduced that abstracts access to the camera (configuration, frame grabbing, initialization) and encapsulates v4l2, the CL-SDK and OpenCV, in order to provide a single object for accessing the camera and its configuration on Linux and Windows.

3. INI-parser

Depending on the camera access mode (CL-SDK or v4l2), the camera settings may be changed permanently (even across a restart). Therefore it is useful to make a backup of the camera configuration before modifying it and to restore it again on termination.
To store the configuration, and without reinventing the wheel, the "iniparser" from [ndevilla.free.fr/iniparser] is used to easily write and read INI files. It might also come in handy in the future to save lens-distortion parameters.
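As a small sketch of how the restore step could look with iniparser (the file name and key are made up for illustration):

#include <iniparser.h>

/* Read a previously backed-up exposure value; fall back to a default if no backup exists. */
int restore_exposure(const char *backup_file, int fallback)
{
    dictionary *ini = iniparser_load(backup_file);   /* e.g. "camera_backup.ini" */
    if (ini == NULL)
        return fallback;
    int exposure = iniparser_getint(ini, "camera:exposure", fallback);
    iniparser_freedict(ini);
    return exposure;
}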

4. Linux RC1 (binaries available for Linux & Windows)

I am happy to announce that the demo application now runs perfectly under Linux (tested on Ubuntu 12.04) as well as on Windows 7. If you'd like to give it a try, probably the easiest way is to take the binaries from the OpenCVExample.zip file within the zipball [ex4].




- startDemo.bat: click this to quickstart on Windows
- startDemo.sh: click this to quickstart on Linux (sudo & chmod a+x may be required)
- Debug(xxx): contains the binary for your platform *
- lib: contains prebuilt libraries of psmoveapi, OpenCV, CLEyeMulticam *
- debug.html: click this to view the HTML trace within your browser (do not remove!)
- debug.js: contains the actual debug data (generated during runtime)
*: all binaries were built on either Win7 x32 or Ubuntu Linux 12.04 x32

enjoy!
   cheerio benjamin


System requirements:

Monday, June 11, 2012

New labs application: Sensorfilter

If you've been watching the PS Move API repository recently, you might have noticed the new "labs/" subdirectory. In there, I'll push some small utilities that I use for debugging and visualization of the current inner workings of the library. The first tool to be put there is "sensorfilter", which is a quick visualization utility that I wrote for testing the new sensor filtering and calibration APIs. It makes use of both PSMoveFilter and PSMoveCalibration, as well as the original PSMove API. With a properly calibrated controller, you can get good readings (again, I've moved the controller a lot for this screenshot):



The slider at the left controls the current low-pass filter implementation's alpha value (i.e. how quickly the sensor values converge to the newly-read value). As the sensor filter API is kept modular, it's possible to stick other filter implementations in there without having to change client applications (of course, if there are tweakable settings, the client application has to know about these). With the Sensor Filter utility, it's easy to try out new filters and to sanity-check the calibration code.
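For reference, the low-pass filter behind that alpha slider is essentially simple exponential smoothing (a sketch of the idea, not the library's exact filter code):

/* alpha close to 1.0 follows new readings quickly, alpha close to 0.0 smooths heavily */
float lowpass_step(float filtered, float raw, float alpha)
{
    return alpha * raw + (1.0f - alpha) * filtered;
}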

The utility is available on github.com/thp/psmoveapi in the "labs/sensorfilter" subdirectory. Have a look at the README file to find out how you can build it. It depends on Qt 4 (tested with 4.8).

Plans for the next few days:

  • Have a look at the OpenCV status, provide feedback to Benjamin
  • Try improved sensor filtering algorithms and compare them
  • Finish the calibration backend code, supporting USB "calibration blob" modes
  • Clean up and document the code, extend the Python and Java bindings

Sensor calibration: Custom method and calibration blob

In the last few days, I've been working on getting a basic sensor data filtering infrastructure set up. I've also added support for getting and storing the calibration data that is saved on the controller (the axis naming is a bit different in the PS Move API compared to what you will find on that Wiki page). In addition to the factory-set calibration data, I've implemented support for a "custom" calibration scheme where the user has to do a 6-point tumble test, whose readings are used as anchor points for calibration.

The custom calibration scheme works a bit like "mccalibrate" from linmctool, but has (at the moment) a bit simpler algorithm (taking the average over 200 sensor readings). The new calibration tool that I wrote (c/calibrate.c) can detect if you have moved your controller too much while the readings were taken, and will ask you to do the given position again. A custom calibration could look like this (I've moved the controller a lot for the first "buttons up" reading to demo the move detector code):

~S/psmove/psmoveapi% build/calibrate 
Serial number: 04:76:6e:XX:XX:XX
Put the controller in the position 'bulb up' and press the Move button
All readings done for bulb up.
bulb up:
a (avg:     1 |  4359 |   188)
a (dev:    20 |    13 |    43)
m (avg:     2 |    -8 |  -421)
m (dev:     4 |     8 |     5)

Put the controller in the position 'bulb down' and press the Move button
All readings done for bulb down.
bulb down:
a (avg:  -165 | -4379 |  -113)
a (dev:    30 |    20 |    48)
m (avg:   -69 |   287 |  -435)
m (dev:     5 |    10 |     5)

Put the controller in the position 'buttons up' and press the Move button
All readings done for buttons up.
buttons up:
a (avg:   177 |    62 |  4173)
a (dev:  3940 |  2079 |   987)
m (avg:   -34 |    57 |  -250)
m (dev:    22 |    16 |     6)



  DEVIATION TOO HIGH - PLEASE RETRY

Put the controller in the position 'buttons up' and press the Move button
All readings done for buttons up.
buttons up:
a (avg:   -41 |   358 |  4362)
a (dev:    22 |    19 |    19)
m (avg:   -29 |    77 |  -250)
m (dev:     5 |    10 |     5)

Put the controller in the position 'buttons down' and press the Move button
All readings done for buttons down.
buttons down:
a (avg:  -128 |   422 | -4343)
a (dev:    28 |    21 |    25)
m (avg:   -61 |    84 |  -515)
m (dev:     5 |    10 |     7)

Put the controller in the position 'buttons left' and press the Move button
All readings done for buttons left.
buttons left:
a (avg:  4252 |   188 |    63)
a (dev:    38 |    41 |    49)
m (avg:    96 |    76 |  -392)
m (dev:     4 |    13 |     8)

Put the controller in the position 'buttons right' and press the Move button
All readings done for buttons right.
buttons right:
a (avg: -4458 |   338 |   -82)
a (dev:    26 |    24 |    35)
m (avg:  -187 |    85 |  -369)
m (dev:     5 |    13 |     6)

Now that we have done a calibration run, we need a tool to display the results (also, we need a tool that reads the data from USB and stores it): Enter "dump_calibration". This tool will read and persist all calibration blobs of connected USB controllers (the "calibrate" tool will only store custom calibration, and only for Bluetooth controllers). When run with a Bluetooth controller (and again assuming that you have already done the USB fetching part), you can get output like this:

~S/psmove/psmoveapi% build/dump_calibration 
File: /Users/thp/.psmoveapi/04_76_6e_XX_XX_XX.calibration.txt
Flags: 3
Have USB calibration:
10 00 67 07 4f 7f a4 7f c2 90 68 6e 25 80 05 80
60 7f 10 80 bf 6e 75 90 c6 7f c5 7f c1 7f bb 90
33 80 47 7f c7 6e 90 7f d2 08 db 7f 57 80 47 80
d7 07 d2 7f 58 80 4b 80 00 00 00 00 00 00 00 00
00 01 ce 08 e0 01 04 97 53 80 5b 80 e0 01 cc 7f
7b 90 39 80 e0 01 dd 7f 4d 80 64 94 f4 07 d1 d7
12 41 72 fc d0 c0 c9 3e 0d c2 a4 1c 6f 3f a9 90
7b 3f 37 5c 71 3f 02 1d 32 3f 87 69 a1 3d 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
# Orientation #0: ( -177 |   -92 |  4290)
# Orientation #1: (-4504 |    37 |     5)
# Orientation #2: ( -160 |    16 | -4417)
# Orientation #3: ( 4213 |   -58 |   -59)
# Orientation #4: (  -63 |  4283 |    51)
# Orientation #5: ( -185 | -4409 |  -112)
# Gyro X, 80 rpm: ( 5892 |    83 |    91)
# Gyro Y, 80 rpm: (  -52 |  4219 |    57)
# Gyro Z, 80 rpm: (  -35 |    77 |  5220)
# byte at 0x3F: 00

Have custom calibration:
         ax         ay         az         mx         my         mz
#0:       1.27    4359.10     187.74       2.04      -8.18    -421.33 
#1:    -164.57   -4378.57    -112.98     -69.21     286.53    -435.20 
#2:     -41.38     358.21    4361.73     -28.52      77.03    -249.96 
#3:    -127.78     421.94   -4342.93     -61.33      83.87    -514.74 
#4:    4251.81     187.99      62.83      95.57      75.67    -391.71 
#5:   -4458.25     338.40     -81.67    -187.27      85.35    -369.02 

This calibration file can be used by the new PSMoveCalibration API that wraps a PSMove object and provides calibration features on top of it. The function that users will probably use most is psmove_calibration_map() - it takes as input 3, 6 or 9 integer values and converts them into corresponding float values that have been normalized.
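To illustrate the idea behind such a mapping (this is not the actual psmove_calibration_map() code or signature, just a sketch): for each accelerometer axis, the +1g and -1g anchor readings from the tumble test give an offset and a scale.

/* Map a raw accelerometer reading to g-units using the +1g/-1g anchor readings
   obtained for that axis during the tumble test. */
float map_accel_axis(int raw, int raw_at_plus_1g, int raw_at_minus_1g)
{
    float zero = (raw_at_plus_1g + raw_at_minus_1g) / 2.0f;  /* reading at 0 g */
    float gain = (raw_at_plus_1g - raw_at_minus_1g) / 2.0f;  /* counts per 1 g */
    return ((float)raw - zero) / gain;
}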

With the tumble test ("custom calibration"), we only get values for the accelerometer and magnetometer - for calibrating the gyro, we would need to have access to a turntable and control its speed - something that's not impossible to do, but very hard. Thanks to the research done by other MoveOnPC people, we can extract the information from the USB calibration blob - it stores the expected readings for 80 rotations/minute (according to the wiki page).

You can find the new code on github.com/thp/psmoveapi - expect some rough edges and more updates in the coming days and weeks :)

Thursday, June 7, 2012

3rd OpenCV PSMove Example (Region Of Interest)

This is the third blog post about the implementation of a tracker for the colored sphere of the PSMove controller. This time it is about optimizing the tracker's computation time, and some other stuff.

1. Increasing calculation speed

Thanks to Budaházi Viktor from [moveframework] I learned [mailinglist] that 70 fps is probably not fast enough, as applications using the psmoveapi may already put a heavy load on the system.

Therefore I introduced a technique called ROI (region of interest) [opencv roi example] in order to reduce calculation time. The main idea is that instead of evaluating the whole picture, we only evaluate a region of the picture in which it is very likely to find what we are searching for. This can speed up calculations tremendously, however the framerate is not constant anymore, as the ROI is shrunk or extended during runtime.

So here is what I did: the application supports an arbitrary number of ROI levels, each reducing the size by 40% compared to the level above. In the demo I chose to have 5 levels of ROI, so that:
Level 1:                  640x480 px (full camera image)
Level 2: 60% of (1) --> 383x383 px
Level 3: 60% of (2) --> 229x229 px
Level 4: 60% of (3) --> 137x137 px
Level 5: 60% of (4) -->   82x82 px

In the main loop of the tracker I calculate the bounding box of the sphere found in the current image. For the next iteration of the loop, the ROI level is then set to one that can hold that bounding box (multiplied by 2). If the sphere was not found at all, I go upwards in the hierarchy of ROI levels until the sphere is found again. The center of the ROI is always set to the last location where the sphere was found. Future implementations may use movement prediction and additional sensor data to shift the center towards the direction in which the user is likely to move the controller. This would reduce switching between the ROI levels for fast movements.
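A small sketch of that level selection (names and structure are mine; sizes as in the list above):

#include <opencv/cv.h>

static const int ROI_SIZE[] = { 640, 383, 229, 137, 82 };   /* level 0 is the full image */
#define ROI_LEVELS 5

/* Pick the smallest ROI level that can still hold twice the sphere's bounding box. */
int pick_roi_level(CvRect blob)
{
    int needed = 2 * (blob.width > blob.height ? blob.width : blob.height);
    int level;
    for (level = ROI_LEVELS - 1; level > 0; level--)
        if (ROI_SIZE[level] >= needed)
            return level;
    return 0;   /* nothing smaller fits: search the full camera image */
}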

At the top of the video you can now see the framerate, the expected sphere color, the average luminance, the camera exposure and the ROI size. The white square in the image denotes the size and location of the current ROI. Note how the framerate increases when I move further away from the camera. With the help of the ROI I now get framerates of up to 1500 fps :P.

2. Handling distortions from (fluorescent) light sources

I also figured out that having a fluorescent light source in a room may cause the calibration to fail and highly influence the accuracy of the tracking. Due to their mode of operation, the lamps have, just like the camera, something similar to a refresh rate. As the camera and the light sources are not synchronized and don't have the same refresh rate, the video feed seems to flicker, which means there are travelling darker/brighter horizontal/diagonal regions within the camera image. [light flickering]

In one of the previous posts I explained that I perform the color calibration with the help of a sequence of difference images. The light flickering causes small but recognizable differences in these difference images, which are then in turn mistaken for the sphere being lit/unlit.

Increasing the number of image pairs taken by one already reduced false detections considerably. However, it is still not enough, as some false detections remain in all image pairs, and further increasing the number of pairs may neither be bearable for the user, nor is it clear whether it would be beneficial.

I learned from a colleague about morphological operations like [dilation] and [erosion], which were quite helpful for cleansing the image of smaller false detections.

1) original image
2) difference image
3) thresholded image
4) eroded/dilated image

Notice how, in the lower left corner, the lamp causes a larger white area in the thresholded image (3) and how it is removed by a subsequent erosion and dilation in image (4).
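In OpenCV terms this is a single erosion followed by a single dilation (a morphological "opening"); a minimal sketch:

#include <opencv/cv.h>

/* Remove small white speckles (e.g. the flickering lamp) from the thresholded mask
   while keeping the large sphere blob intact. */
void cleanse_mask(IplImage *mask)
{
    cvErode(mask, mask, NULL, 1);    /* shrink blobs: tiny ones disappear completely */
    cvDilate(mask, mask, NULL, 1);   /* grow the surviving blobs back to their size  */
}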

3. Choosing the right camera exposure time

Depending on the current average luminance in the camera image, the camera exposure is chosen appropriately. I found out that an exposure value smaller than 0x10 causes colors to be very grey-ish, while going higher than 0x40 increases motion blur (for a fast-moving controller) and makes the sphere look very white-ish due to the long exposure.

At the very beginning of the calibration I therefore start with exposure 0x10 and go up step by step to exposure 0x40 until I get an average luminance of 25.

The average luminance is defined in my case as:

IplImage* cameraImage;  /* the current camera frame */
CvScalar avgColor = cvAvg(cameraImage, NULL);
float averageLuminance = (avgColor.val[0] + avgColor.val[1] + avgColor.val[2]) / 3;

If the resulting average luminance is above 0x20, I reduce the sphere's brightness to 70%, and if it is above 0x30 I decrease it to 50%. This ensures that the sphere's color does not look too white-ish for longer exposure times.
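Expressed as code, my reading of that rule looks roughly like this (a sketch, thresholds as described above):

/* Pick the sphere brightness based on the average luminance of the camera image. */
float pick_sphere_brightness(float averageLuminance)
{
    if (averageLuminance > 0x30)
        return 0.5f;   /* long exposure: dim the sphere to 50% */
    if (averageLuminance > 0x20)
        return 0.7f;   /* medium exposure: dim the sphere to 70% */
    return 1.0f;       /* otherwise: full brightness */
}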

Well, that's it for this time. ...

system requirements:

Monday, June 4, 2012

2nd OpenCV PSMove Example (color tracking)


This is the second blog post about the implementation of a tracker for the colored sphere of the PSMove controller. This time it is about the tracker itself.

Given the HSV color (hc) of the sphere in the camera image, we can now try to track the sphere within the video feed.

Here is what I do (a compact code sketch follows the list):
  • convert the color frame to HSV
  • filter the HSV frame with cvInRangeS(src, min, max, mask)
    • min = cvScalar(hc.val[0] - 5, hc.val[1] - 85, hc.val[2] - 35, 0);
    • max = cvScalar(hc.val[0] + 5, hc.val[1] + 85, hc.val[2] + 35, 0);
    • As you can see, the hue value is only allowed a very limited range (+-5), whilst the saturation may vary quite a lot (+-85) and the intensity moderately (+-35). Giving the saturation so much room to vary gives better robustness for fast movements.
  • remove small noise from the image with a median filter, cvSmooth(mask, mask, CV_MEDIAN, 5)
  • find all blobs within the binary image with cvFindContours
    • iterate all found contours and remember only the biggest one
  • calculate the center of mass from that biggest contour to approximate the center of the sphere
    •  cvMoments(mask, &mu, 0);
    •  cvPoint(mu.m10 / mu.m00, mu.m01 / mu.m00);
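Putting the steps together, a compact sketch (my own variable names and helper, not the actual example code; the HSV, mask and storage buffers are assumed to be allocated by the caller):

#include <math.h>
#include <opencv/cv.h>

/* Returns the estimated sphere center in pixel coordinates, or (-1,-1) if not found. */
CvPoint track_sphere(IplImage *frame, IplImage *hsv, IplImage *mask,
                     CvScalar hc, CvMemStorage *storage)
{
    CvScalar min = cvScalar(hc.val[0] - 5, hc.val[1] - 85, hc.val[2] - 35, 0);
    CvScalar max = cvScalar(hc.val[0] + 5, hc.val[1] + 85, hc.val[2] + 35, 0);
    CvSeq *contours = NULL, *biggest = NULL, *c;
    double biggestArea = 0;
    CvMoments mu;

    cvCvtColor(frame, hsv, CV_BGR2HSV);            /* 1. convert to HSV               */
    cvInRangeS(hsv, min, max, mask);               /* 2. color filter -> binary mask  */
    cvSmooth(mask, mask, CV_MEDIAN, 5, 0, 0, 0);   /* 3. median filter against noise  */

    cvFindContours(mask, storage, &contours, sizeof(CvContour),
                   CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
    for (c = contours; c != NULL; c = c->h_next) { /* 4. keep only the biggest blob   */
        double area = fabs(cvContourArea(c, CV_WHOLE_SEQ, 0));
        if (area > biggestArea) { biggestArea = area; biggest = c; }
    }
    if (biggest == NULL)
        return cvPoint(-1, -1);

    cvMoments(biggest, &mu, 0);                    /* 5. center of mass of that blob  */
    return cvPoint((int)(mu.m10 / mu.m00), (int)(mu.m01 / mu.m00));
}

Note that cvFindContours modifies the mask in place, and that the storage should be cleared with cvClearMemStorage() between frames.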






The tracking (just the calculation steps above) runs on my computer (Win7, Intel E5200) at around 70 fps and seems to be quite robust. However, as you can see at the end of the first video, calculating the center of mass may be fast, but it is error-prone to occlusions!

system requirements:



1st OpenCV PSMove Example (color calibration)

Hi there everybody. This is my first blog post about the implementation of a tracker for the colored sphere of the PSMove controller. For now I'll just present some of the difficulties I have encountered and how I solved them.

1st Problem (what color does the sphere have?)

I intended the tracker to work with a color-filter in order to find the glowing sphere in the camera image. Therefore it is important to know the actual color of the sphere in the camera image!

The color may be highly influenced by the current lighting conditions, the camera's sensor and driver functionality of the camera like auto-white-balance, auto-exposure and auto-gain. This gets even worse considering that lighting conditions may change over time (switching lights on/off, closing curtains ...).

The main idea to bypass this problem is to take two pictures within a short time, one in which the sphere is off and one in which it is lit. From this pair it is easy to compute the difference with cvAbsDiff() in order to find the area in the image where the sphere is located, and then extract the color information of the lit sphere with cvAvg() for that area.

However, as there may be motion in the picture, either by the users themselves or by someone/something else, calculating a single difference image is not enough, e.g. as can be seen in the following picture.
difference image with a lot of user motion (white areas)
In order to diminish unwanted motion in the difference image, three or more image pairs are taken. As it is likely that only the difference caused by the lit sphere is visible in all the calculated difference images, the area where the sphere is located can be approximated by combining the difference images, e.g. as can be seen in the following picture.
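A sketch of that combination step (my own helper, not the example code; the threshold value of 20 is an arbitrary choice for illustration):

#include <opencv/cv.h>

/* Combine the thresholded difference images of several lit/unlit pairs and read the
   average sphere color from the area that changed in ALL pairs. */
CvScalar estimate_sphere_color(IplImage **lit, IplImage **unlit, int pairs,
                               IplImage *diff, IplImage *gray, IplImage *mask)
{
    int i;
    cvSet(mask, cvScalar(255, 0, 0, 0), NULL);        /* start with "everything"     */
    for (i = 0; i < pairs; i++) {
        cvAbsDiff(lit[i], unlit[i], diff);            /* where did the image change? */
        cvCvtColor(diff, gray, CV_BGR2GRAY);
        cvThreshold(gray, gray, 20, 255, CV_THRESH_BINARY);
        cvAnd(mask, gray, mask, NULL);                /* keep what changed in every  */
    }                                                 /* pair: the lit sphere        */
    return cvAvg(lit[pairs - 1], mask);               /* average color in that area  */
}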



Still, this procedure may produce unexpected behaviour if the user moves a lot. Therefore I introduced further checks.

For all images of the previous series where the sphere was lit, do the following (a small sketch follows the list):
  1. Filter the image with the color we just approximated, with the help of cvInRangeS(). In the resulting binary image perform a search for contours with cvFindContours(). If not exactly ONE contour is found, discard the color and start the color calibration again.
  2. If the area of that contour is too small (e.g. <100 px) --> discard the color and start again
  3. If the area of the contour differs too much from image to image --> discard the color and start again
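A minimal sketch of these checks (the names and the exact call pattern are mine):

#include <math.h>
#include <opencv/cv.h>

/* Returns 1 if the filtered binary image passes checks 1 and 2, and hands the blob
   area to the caller for the image-to-image comparison of check 3. */
int color_is_plausible(IplImage *binary, CvMemStorage *storage, double *area_out)
{
    CvSeq *contours = NULL;
    int n = cvFindContours(binary, storage, &contours, sizeof(CvContour),
                           CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
    if (n != 1)
        return 0;                                      /* check 1: exactly ONE contour */
    *area_out = fabs(cvContourArea(contours, CV_WHOLE_SEQ, 0));
    if (*area_out < 100)
        return 0;                                      /* check 2: contour too small   */
    return 1;
}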

These checks turned out to be quite robust under different lighting conditions.

If you'd like to check out the code, use the tag https://github.com/benniven/psmoveapiplayground/tree/ex1 to get it from GitHub.

system requirements:

Saturday, June 2, 2012

Multi-threaded LED writing

Today I've experimented a bit with using multi-threading in Linux to write LED status updates. On Linux, setting LEDs actually blocks the process - depending on the controller - for 30 ms up to 500 ms in some cases. I'm not yet sure if there is a way to make this faster (in Mac OS X on the same hardware, updating LEDs is no problem, but maybe OS X does not wait for an acknowledgement when sending out the set LEDs packet).

You can find the multi-threading code in the "multithreaded" branch on GitHub. Now that we have the nice test_read_performance tool, we can compare the results with the previous results from the OS X installation (same hardware):

 -- PS Move API Sensor Reading Performance Test --

Testing STATIC READ performance (non-changing LED setting)
1000 reads in 13043 ms = 76.669478 reads/sec (144x seq jump = 14.40 %)
1000 reads in 12563 ms = 79.598822 reads/sec (100x seq jump = 10.00 %)
1000 reads in 12451 ms = 80.314834 reads/sec (88x seq jump = 8.80 %)
=====
Mean over 3 rounds: 78.861043 reads/sec

Testing SMART READ performance (rate-limited LED setting)
1000 reads in 20073 ms = 49.818164 reads/sec (177x seq jump = 17.70 %)
1000 reads in 20113 ms = 49.719087 reads/sec (196x seq jump = 19.60 %)
1000 reads in 19858 ms = 50.357539 reads/sec (190x seq jump = 19.00 %)
=====
Mean over 3 rounds: 49.964930 reads/sec

Testing BAD READ performance (continous LED setting)
1000 reads in 26187 ms = 38.186887 reads/sec (310x seq jump = 31.00 %)
1000 reads in 25985 ms = 38.483741 reads/sec (306x seq jump = 30.60 %)
1000 reads in 26784 ms = 37.335723 reads/sec (343x seq jump = 34.30 %)
=====
Mean over 3 rounds: 38.002116 reads/sec

Testing RAW READ performance (no LED setting)
1000 reads in 12612 ms = 79.289565 reads/sec (101x seq jump = 10.10 %)
1000 reads in 12639 ms = 79.120184 reads/sec (109x seq jump = 10.90 %)
1000 reads in 12576 ms = 79.516539 reads/sec (107x seq jump = 10.70 %)
=====
Mean over 3 rounds: 79.308762 reads/sec
Still not as good as on OS X, but a good starting point. Without multi-threading on Linux, the results are much worse when the LEDs are updated often:

 -- PS Move API Sensor Reading Performance Test --

Testing STATIC READ performance (non-changing LED setting)
1000 reads in 12890 ms = 77.579519 reads/sec (125x seq jump = 12.50 %)
1000 reads in 12660 ms = 78.988942 reads/sec (104x seq jump = 10.40 %)
1000 reads in 12650 ms = 79.051383 reads/sec (108x seq jump = 10.80 %)
=====
Mean over 3 rounds: 78.539948 reads/sec

Testing SMART READ performance (rate-limited LED setting)
1000 reads in 14596 ms = 68.511921 reads/sec (161x seq jump = 16.10 %)
1000 reads in 14315 ms = 69.856794 reads/sec (143x seq jump = 14.30 %)
1000 reads in 14224 ms = 70.303712 reads/sec (139x seq jump = 13.90 %)
=====
Mean over 3 rounds: 69.557475 reads/sec

Testing BAD READ performance (continous LED setting)
1000 reads in 41132 ms = 24.311971 reads/sec (69x seq jump = 6.90 %)
1000 reads in 41014 ms = 24.381918 reads/sec (70x seq jump = 7.00 %)
1000 reads in 41153 ms = 24.299565 reads/sec (76x seq jump = 7.60 %)
=====
Mean over 3 rounds: 24.331151 reads/sec

Testing RAW READ performance (no LED setting)
1000 reads in 12683 ms = 78.845699 reads/sec (114x seq jump = 11.40 %)
1000 reads in 12723 ms = 78.597815 reads/sec (114x seq jump = 11.40 %)
1000 reads in 12640 ms = 79.113924 reads/sec (107x seq jump = 10.70 %)
=====
Mean over 3 rounds: 78.852478 reads/sec


Interestingly, the multi-threaded variant is worse in the SMART READ test (the rate-limited LED update variant). I'm not sure why this is the case - maybe it's some threading overhead. For some of the older PS Move controllers that I have here, it's even worse - it seems like some controllers are faster to respond than others. I wonder if there's a difference in the firmware or if it's different hardware (the faster controller is the one that I bought more recently).

Friday, June 1, 2012

Bluetooth on Linux: Fixing hidapi's HID enumeration

One obvious goal of the PS Move API is to be cross-platform. On Linux right now, we can get the controller working via USB with no problems, but Bluetooth devices are not found via hidapi's hid_enumerate() method. One could work around this by opening the /dev/hidraw[0-9]* devices directly, and picking one. However this is kludgy and would require another special-case. The reason why the hid_enumerate() method does not work on Linux is because it always tries to find the USB device for a given hidraw device (even if the hidraw device is a Bluetooth one) and returns the VID/PID from there (this works fine for USB devices, but in case of Bluetooth devices what is returned is the VID/PID of the Bluetooth host adapter, which is not what we are interested in).

As a workaround, I've now implemented a better detection mechanism based on the sysfs path of the hidraw file. For Bluetooth devices, the path could look like this:

/sys/devices/pci0000:00/0000:00:06.0/usb4/4-1/4-1.1/4-1.1:1.0/bluetooth/hci0/hci0:12/0005:054C:03D5.0007/hidraw/hidraw5

For USB devices, it could look like this:

/sys/devices/pci0000:00/0000:00:06.1/usb2/2-2/2-2.1/2-2.1:1.0/0003:054C:03D5.0008/hidraw/hidraw6

In the case of the PS Move Motion Controller, the vendor ID is 0x054c and the product ID is 0x03d5; you can spot the occurrence of these IDs in the paths above. Given that hidapi already determines the sysfs path of the hidraw device, it's easy to write a function that extracts the IDs from the path. I've done so now in a patch against hidapi: commit 8ba92edb519 in thp/hidapi.
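A sketch of what such an extraction could look like (this is just an illustration, not the actual hidapi patch):

#include <stdio.h>
#include <string.h>

/* Pull VID/PID out of the "0005:054C:03D5.0007"-style component of a sysfs path.
   The last matching component wins, since it is the one closest to .../hidraw/hidrawN. */
int parse_sysfs_ids(const char *sysfs_path, unsigned short *vid, unsigned short *pid)
{
    const char *p = sysfs_path;
    unsigned int bus, vendor, product;
    int found = 0;
    while ((p = strchr(p, '/')) != NULL) {
        p++;   /* inspect the path component following this slash */
        if (sscanf(p, "%x:%4x:%4x.", &bus, &vendor, &product) == 3) {
            *vid = (unsigned short)vendor;    /* e.g. 0x054c */
            *pid = (unsigned short)product;   /* e.g. 0x03d5 */
            found = 1;
        }
    }
    return found;
}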

In order to get it merged into hidapi upstream, I've created pull request #62 at the signal11/hidapi repository on Github. Let's see if I have to rework/improve the patch or if it will be accepted.

After the patch has been merged (you can merge it locally or clone from my hidapi repo), connecting to the PS Move on Linux will become even easier. The only part remaining then is figuring out the permission problems on the hidraw devices (we could probably fix this with some udev rules) and getting Bluez' bluetoothd to reliably accept connections from new PS Move controllers.

Wednesday, May 30, 2012

Profiling sensor reading performance

Right now, the PS Move API is single-threaded. This makes debugging easier, and simplifies the code and usage of it. What this also means is that setting the LED colors will slow down sensor reading (not taking into account any Bluetooth communication slowdown that might happen even in multi-threaded scenarios).

In order to get high-quality sensor readings, the read rate should be as high as possible. On my MacBook Pro in OS X 10.7.4, I currently get an average of 83.49 sensor readings per second if I don't send any LED updates. If I continuously send LED updates (one update after every sensor reading), the reading rate drops to 51.02 sensor readings per second (that's ~61% of the best performance).

In order to improve the situation while still being able to set the LEDs (after all, we need the LEDs on for tracking the controller position with the camera) there are two options: First, if the color is static (i.e. it does not change or does not change often), we only have to send an update every 5 seconds (the LEDs keep glowing in the set color for 5 seconds after an update). And even if we want to modify the color during tracking, we can be smart about the updates and ignore some requests if the update rate is too high.
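A sketch of how those two strategies can be combined into one decision (my own helper, not the library code; the 4000 ms and 120 ms values follow the numbers used in the test below):

#include <string.h>

struct led_state {
    unsigned char rgb[3];   /* last color actually sent to the controller */
    long last_update_ms;    /* timestamp of that update */
};

/* Returns 1 if an LED packet should really be sent now, 0 if the request is skipped. */
int led_update_allowed(struct led_state *s, const unsigned char rgb[3], long now_ms)
{
    int changed = (memcmp(s->rgb, rgb, 3) != 0);
    if (!changed && now_ms - s->last_update_ms < 4000)
        return 0;   /* static color: a refresh every ~4 s keeps the LEDs glowing */
    if (changed && now_ms - s->last_update_ms < 120)
        return 0;   /* changing color: rate-limit updates to one every 120 ms    */
    memcpy(s->rgb, rgb, 3);
    s->last_update_ms = now_ms;
    return 1;
}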

I have implemented both variants in the latest PS Move API code (you can find it in the Git repository) now, and also wrote a small test application (which you can also find in the Git repository as "test_read_performance" when you build the source) to compare the different methods. Here are the results (Git revision 64a3214 on a 2.53 GHz Intel Core 2 Duo, static read = 4000ms between updates, smart read = 120ms between updates):


 -- PS Move API Sensor Reading Performance Test --


Testing STATIC READ performance (non-changing LED setting)
1000 reads in 12373 ms = 80.821143 reads/sec (56x seq jump = 5.60 %)
1000 reads in 11954 ms = 83.654007 reads/sec (29x seq jump = 2.90 %)
1000 reads in 12090 ms = 82.712986 reads/sec (36x seq jump = 3.60 %)
=====
Mean over 3 rounds: 82.396047 reads/sec


Testing SMART READ performance (rate-limited LED setting)
1000 reads in 16386 ms = 61.027707 reads/sec (283x seq jump = 28.30 %)
1000 reads in 16673 ms = 59.977209 reads/sec (304x seq jump = 30.40 %)
1000 reads in 16736 ms = 59.751434 reads/sec (306x seq jump = 30.60 %)
=====
Mean over 3 rounds: 60.252116 reads/sec


Testing BAD READ performance (continous LED setting)
1000 reads in 19593 ms = 51.038636 reads/sec (479x seq jump = 47.90 %)
1000 reads in 19732 ms = 50.679100 reads/sec (486x seq jump = 48.60 %)
1000 reads in 19471 ms = 51.358430 reads/sec (469x seq jump = 46.90 %)
=====
Mean over 3 rounds: 51.025391 reads/sec


Testing RAW READ performance (no LED setting)
1000 reads in 12105 ms = 82.610492 reads/sec (40x seq jump = 4.00 %)
1000 reads in 11987 ms = 83.423709 reads/sec (31x seq jump = 3.10 %)
1000 reads in 11843 ms = 84.438065 reads/sec (21x seq jump = 2.10 %)
=====
Mean over 3 rounds: 83.490753 reads/sec

The ideal situation is RAW READ (not updating the LEDs at all). STATIC READ is when we set the LEDs to a fixed color and only send an update every 4 seconds - the performance impact is not really noticeable in practice. When we try to update the LED colors all the time, but have the new rate-limiting feature enabled (SMART READ), performance drops to 60.25 reads per second (that's 72% of the best performance), but we get an LED update every 120 ms (that's the current threshold for rate limiting and might change in the future). And as discussed above, if we don't do any rate-limiting (BAD READ), the performance is even worse.

The "seq jump" counts are the non-continuous readings of the sequence number that the controller sends with every report. A "seq jump" means that we missed one (or more) sensor readings from the controller. As the sequence number is something internal to the controller, I'm not sure if we can get any better than the 2-4% "seq jumps" in the RAW READ case, but it's good to know that we can nearly read as fast as the controller itself is able to process the data internally.

In practice, even if we do update the LEDs while tracking, we might not update the color every 120 ms and therefore the practical read performance should be between 60.25 and 83.49 reads per second.

Sunday, May 27, 2012

Introduction

Hi. On this blog, we will post about our progress on our two Summer of Code projects that we will work on for the MoveOnPC project:

We are currently experimenting with the code that we have, and are getting everything set up. More details about our progress will be posted here soon (and hopefully regularly). Thomas has written a short introductory mail on our new mailing list, and if you want, you can subscribe to it and join the discussion.