2010 plans for visual stabilization/servoing

From Murray Wiki

Revision as of 23:35, 28 December 2009

The following are some directions for improving our work on visual stabilization/servoing, staying within the same context (i.e., not discussing things like bootstrapping, or more complicated problems such as obstacle avoidance, SLAM, etc.).

  • First of all: real experiments. At this point we should really work on an experimental platform, for two reasons: 1) we have a nice story and enough theoretical material for a fair journal paper, but we need to test it for real; and 2) even if we don't want to do a journal paper now, we should start working with something real; sometimes I think we will not have a fair assessment of the relevant phenomena until we deal with real data. I see two options in the short term (not necessarily exclusive):
    • a helicopter with simulated vision (good for the dynamics, but not enough to qualify as a "real" experiment)
    • a wheeled robot with an onboard camera (do we have one around that weighs less than a ton? can we borrow one from the Burdick lab?)
  • Improve and refine the theory. There are several things left open:
    • Work out exactly when the modified contrast condition is equivalent to the original contrast condition; the question of what implies what is not entirely clear.
    • Clarify different conditions for convergence in pose space (contrast condition) and in image space (in ambiguous situations); treat them as two different problems.
  • Make some small improvements that could enhance the performance a lot (especially, I think, with real data). For example:
    • add a local nonlinearity as a first step, such as using contrast instead of raw luminance.
    • add back some non-bio-plausible elements and see if they make a difference; for example, include the matrix inverse when computing the least-squares estimate.
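The two tweaks above can be sketched in a few lines. Everything here is illustrative (the window size, the toy regression, and the names `local_contrast` and `x_corr` are assumptions, not the actual pipeline): the point is just to contrast the full least-squares estimate, which uses the matrix inverse, with a "bio-plausible" correlation-only estimate that omits it.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Tweak 1 (hypothetical sketch): local contrast instead of raw luminance.
# Normalize each sample by a local mean, making the signal invariant to
# slowly varying illumination.
def local_contrast(y, eps=1e-6):
    m = np.convolve(y, np.ones(5) / 5, mode="same")  # local mean, window of 5
    return (y - m) / (m + eps)

# --- Tweak 2: least squares WITH the matrix inverse vs. the correlation-only
# version that keeps just A^T y (no inverse).
A = rng.normal(size=(100, 3))          # regressors (stand-in for image data)
x_true = np.array([0.3, -1.2, 0.7])    # quantity to estimate
y = A @ x_true                         # noiseless measurements

x_ls = np.linalg.solve(A.T @ A, A.T @ y)   # full least squares (uses inverse)
x_corr = A.T @ y / len(y)                  # correlation only, no inverse

print(x_ls)    # ~ [0.3, -1.2, 0.7]: recovers x_true exactly
print(x_corr)  # biased unless A^T A is (a multiple of) the identity
```

The correlation-only estimate is proportional to the true solution only when the regressors are uncorrelated and of equal power, which is exactly where one would expect the inverse to start mattering on real data.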
  • Study explicitly different variations of the problem, as in my slide at CDC: show how to put visual attitude stabilization, servoing, etc. into the same framework. Such a unification would make the work much more interesting to the community.
  • Medium term improvements:
    • study how we can integrate this approach, which essentially gives a "good enough" skewed gradient field, with nonholonomic constraints. What if the skewed gradient field goes against the constraints?
    • add some more bio-plausible components; for example, let the sensor give spikes instead of a continuous luminance signal, use stochastic units, etc. This is very interesting, but it is essentially an orthogonal detour.
    • add an observer for mu. I got something very simple working in 2D so far. The people doing visual servoing would be very interested in this, and it would be our chance to study and give our take on structure-from-motion. I'm taking the computer vision course taught by Perona and Koch, and I wanted to do something like this for the course project.
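A minimal sketch of what an observer for an unknown parameter mu could look like in the scalar case. The plant model, gains, and names below are illustrative assumptions, not the actual 2D system mentioned above; the sketch just shows the standard gradient-type adaptive observer.

```python
import numpy as np

# Toy adaptive observer for an unknown constant mu (a stand-in for the
# unknown scene parameter in visual servoing; this model is an assumption).
# Plant:    y'   = mu * u * y                       (mu unknown)
# Observer: yh'  = muh * u * y - k * (yh - y)
#           muh' = -gamma * u * y * (yh - y)
# With V = e^2/2 + (muh - mu)^2 / (2*gamma), e = yh - y, one gets
# V' = -k e^2 <= 0, and a persistently exciting input u drives muh -> mu.

def simulate(mu=0.5, k=5.0, gamma=5.0, dt=1e-3, T=50.0):
    y, yh, muh = 1.0, 0.0, 0.0
    for i in range(int(T / dt)):
        u = np.sin(i * dt)        # persistently exciting input
        e = yh - y
        phi = u * y               # regressor
        y += dt * (mu * phi)      # forward-Euler integration
        yh += dt * (muh * phi - k * e)
        muh += dt * (-gamma * phi * e)
    return muh

print(simulate())  # approaches the true mu = 0.5
```

The same Lyapunov argument carries over to vector states, which is presumably what the 2D version amounts to; the interesting part for structure-from-motion is what replaces the scalar regressor.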