2010 plans for visual stabilization/servoing
The following are some directions for improving our work on visual stabilization/servoing, remaining in the same context (i.e., not discussing things like bootstrapping, or more complicated problems such as obstacle avoidance, SLAM, etc.).
- First of all: real experiments. At this point we should really work on an experimental platform, for two reasons: 1) we have a nice story and enough theoretical material for a fair journal paper, but we need to test it for real; and 2) even if we don't want to do a journal paper now, we should start working with something real; sometimes I think we do not have a fair assessment of the relevant phenomena until we deal with real data. I see two options in the short term (not necessarily exclusive):
  - helicopter with simulated vision (good for the dynamics, but not enough to qualify as a "real" experiment)
  - (Shuo) It's possible to use the helicopter system with an onboard wireless camera. See: http://www.cs.cornell.edu/~asaxena/helicopter/autonomousindoorhelicopter_iros.pdf . This paper uses exactly the same helicopter that I used before. Another good thing is that I've had > 6 months of experience with this model.
  - one wheeled robot with onboard camera (do we have one around that weighs less than a ton? can we borrow one from Burdick lab?)
  - (Shuo) It's worthwhile to invest in a good camera at this point (also to save for future experiments). The Point Grey Ladybug seems a good choice: http://www.ptgrey.com/products/ladybug2/index.asp
  - Consider building a separate "calibration platform", such as a pan/tilt/roll unit, to do the bootstrapping separately from the robotic platform.
- Improve and refine the theory. There are several things left open:
  - Work out exactly when the modified contrast condition is equivalent to the contrast condition; the question of which condition implies which is not entirely clear.
  - Clarify the different conditions for convergence in pose space (contrast condition) and in image space (in ambiguous situations); treat them as two different problems.
  - Work out separately the cases of velocity control and force control.
  - Work out the case of a camera not at the center of mass.
  - In general, study the problem of bootstrapping on one platform and then changing the platform. Are the new M, N simple linear combinations of the old ones (e.g., under a rotation)?
  - (Shuo) Try to get rid of some unrealistic assumptions, e.g., "a spherical fly".
- Realistic implementation.
  - So far we have worked with n pixels, for small n, and all the computations require n x n matrices. We should think about better computations (e.g., sparse matrices) for large n; see the sketch after this item.
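A minimal sketch of the sparse-matrix idea, assuming (hypothetically) that each pixel interacts only with its k nearest neighbors, so the n x n interaction matrix has O(kn) nonzeros; the banded structure below is just an illustration:

```python
# Sketch: replace dense n x n pixel-interaction matrices with sparse ones.
# Assumes (hypothetically) each pixel interacts only with its k nearest
# neighbors, giving O(k*n) nonzeros instead of n^2 entries.
import numpy as np
import scipy.sparse as sp

n = 100_000   # number of pixels (a dense n x n float64 matrix would need ~80 GB)
k = 5         # assumed local interaction radius (illustrative)

# Build a banded "interaction" matrix M in sparse CSR format.
offsets = list(range(-k, k + 1))
diagonals = [np.ones(n - abs(o)) for o in offsets]
M = sp.diags(diagonals, offsets=offsets, format="csr")

y = np.random.rand(n)   # luminance values
y_dot = M @ y           # costs O(k*n) instead of O(n^2)
```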
- Do some small improvements that could enhance the performance a lot (especially with real data, I think). For example:
  - put in some local nonlinearity as a first step, such as using contrast instead of luminance (see the contrast sketch after this list).
  - add back some non-bio-plausible elements and see if they make a difference, such as adding the matrix inverse when computing the least-squares estimate (see the least-squares sketch after this list).
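A minimal sketch of the contrast nonlinearity; the mean-normalized form and the window size are assumptions for illustration:

```python
# Sketch: local-contrast nonlinearity applied to a 1D luminance signal.
# The exact form (mean-normalized) and the window size w are illustrative.
import numpy as np

def local_contrast(y, w=5):
    """Contrast c = (y - local_mean) / (local_mean + eps) over a window of size w."""
    kernel = np.ones(w) / w
    local_mean = np.convolve(y, kernel, mode="same")
    return (y - local_mean) / (local_mean + 1e-6)
```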
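And a sketch of the matrix-inverse idea: compare a bio-plausible correlation (Hebbian-like) estimate against the full least-squares estimate, which adds back the inverse of U^T U. The linear model y_dot = A u and all names are assumptions for illustration:

```python
# Sketch: correlation estimate vs. full least squares for y_dot = A @ u.
# With correlated commands, the correlation estimate is biased while the
# least-squares estimate (which includes the matrix inverse) is not.
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 50, 3, 10_000
A_true = rng.standard_normal((n, m))

# Correlated commands (so the two estimates actually differ).
C = np.array([[1.0, 0.5, 0.0], [0.0, 1.0, 0.5], [0.0, 0.0, 1.0]])
U = rng.standard_normal((T, m)) @ C
Y_dot = U @ A_true.T + 0.1 * rng.standard_normal((T, n))

# Bio-plausible: plain correlation, no matrix inverse.
A_corr = Y_dot.T @ U / T

# Non-bio-plausible: proper least squares, multiplying by (U^T U)^{-1}.
A_ls = Y_dot.T @ U @ np.linalg.inv(U.T @ U)

print("correlation error:", np.linalg.norm(A_corr - A_true))
print("least-squares error:", np.linalg.norm(A_ls - A_true))
```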
- Study explicitly different variations of the problem, as in my slide at CDC: show how to put visual attitude stabilization, servoing, etc. in the same framework. Such a unification would make the work much more interesting to the community.
- Medium-term improvements:
  - Study how we can integrate this approach, which essentially gives a "good enough" skewed gradient field, with nonholonomic constraints. What if the skewed gradient field goes against the constraints? (A projection sketch follows this list.)
  - Add some more bio-plausible components; for example, let the sensor give spikes instead of a continuous luminance signal, use stochastic units, etc. This is very interesting but it is essentially an orthogonal detour.
  - Add an observer for mu. I got something very simple working in 2D so far. (A minimal observer sketch follows this list.) The people doing visual servoing would be very interested in this, and it would be our chance to study and give our take on structure-from-motion. I'm taking the computer vision course of Perona and Koch and I wanted to do something like this for the course project.
  - (Shuo) Explicitly deal with non-ideal factors in practice: e.g., occlusion, lighting changes, moving objects in the scene...
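On the nonholonomic question, a sketch of one standard projection idea (not necessarily what we would end up doing): move only along the unicycle's feasible direction and steer toward the field; the unicycle model and the gains are assumptions:

```python
# Sketch: follow a desired 2D gradient field g under unicycle constraints.
# v moves along the current heading; w steers the heading toward g.
import numpy as np

def unicycle_follow(g, theta, k_v=1.0, k_w=2.0):
    """g: desired field at the robot (2-vector); theta: heading angle."""
    heading = np.array([np.cos(theta), np.sin(theta)])
    dot = heading @ g                       # component of g along the heading
    cross = heading[0] * g[1] - heading[1] * g[0]
    v = k_v * dot                           # forward speed (negative = back up)
    w = k_w * np.arctan2(cross, dot)        # turn rate toward the field
    return v, w
```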
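And a hedged sketch of what the simple mu observer might look like, assuming (purely for illustration) the 1D image dynamics y_dot = mu * grad(y) * v; the model form and the gain are assumptions, not the exact 2D prototype:

```python
# Sketch: gradient-descent observer for the nearness mu, driving the
# prediction error (y_dot - mu_hat * grad_y * v) to zero pixelwise.
import numpy as np

def mu_observer_step(mu_hat, y, y_dot, v, dt, k=1.0):
    """One Euler step; mu_hat converges where grad(y) != 0 and v is exciting."""
    grad_y = np.gradient(y)                  # spatial luminance gradient
    y_dot_pred = mu_hat * grad_y * v         # predicted luminance change
    return mu_hat + dt * k * grad_y * v * (y_dot - y_dot_pred)
```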
Experiment plan in detail (towards IROS)
- Test the camera without the helicopter.
- Test interference with the helicopter transmitter.
- Calibrate the camera: get s_i, and compute M, N (see the calibration sketch after this list).
- Test gradient directions without the helicopter (eye-in-hand).
  - Test the region of attraction for different orientations of the camera.
- Buy helicopter
- Make helicopter work
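A hedged sketch of the "compute M, N" step, assuming (as one possible model, not necessarily the exact one we use) the bilinear dynamics y_dot = sum_i u_i (M_i y + N_i), fit by least squares from logged data; all shapes and names are illustrative:

```python
# Sketch: estimate tensors M (m, n, n) and N (m, n) from logged data,
# assuming y_dot(t) = sum_i u_i(t) * (M[i] @ y(t) + N[i]).
import numpy as np

def fit_MN(Y, Y_dot, U):
    """Y: (T, n) luminances; Y_dot: (T, n) derivatives; U: (T, m) commands."""
    T, n = Y.shape
    m = U.shape[1]
    # Per-sample regressors: u_i * y_j terms plus u_i bias terms.
    Phi = np.concatenate(
        [U[:, :, None] * Y[:, None, :],   # (T, m, n): u_i * y_j
         U[:, :, None]],                  # (T, m, 1): u_i
        axis=2).reshape(T, m * (n + 1))
    # One least-squares problem shared across all n pixels.
    W, *_ = np.linalg.lstsq(Phi, Y_dot, rcond=None)   # (m*(n+1), n)
    W = W.T.reshape(n, m, n + 1)
    M = np.transpose(W[:, :, :n], (1, 0, 2))          # (m, n, n)
    N = W[:, :, n].T                                  # (m, n)
    return M, N
```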