I believe that the cost of human-scale and human-safe manipulators is a major impediment to progress in widespread, real-world deployment of robots. Not only are many deployment scenarios out of the question with current state-of-the-art arms (which are often well over $100K), but even research itself is limited to a small number of institutions where manipulators are time-shared between researchers. I am actively working on creating low-cost, human-safe manipulators that aim to be much cheaper than the existing state of the art, while still having decent-enough performance to accomplish a variety of tasks envisioned for “personal robots.”
For several years, I spent most of my time on the Stanford AI Robot (STAIR) project, with the long-term goal of developing technology necessary to put a robot in every home and workplace. I became interested in closely-coupled perception and manipulation, and high-resolution depth sensing at distances useful for manipulation and object recognition on human-scale robots. I'm a big believer in improving the quality of sensing and lowering the cost of manipulators, and think that can happen simultaneously.
It takes a lot of software-systems effort to allow closely-coupled perception and manipulation to actually work. I thus became interested in distributed computing, especially as applied to near-realtime processing for large robots such as the STAIR platforms. I have written several software frameworks towards this goal, which have run the STAIR robots over the years and tied them into our computing cluster. This led directly to the Robot Operating System (ROS) with collaborators at Willow Garage. ROS aims to take everything we learned from our respective previous grand-unified-frameworks and create another, awesomer, grand-unified-framework. The ROS documentation is on a wiki generously hosted by Willow Garage. I'm always interested in new ideas to improve and simplify ROS, so please flame away.
ROS was designed from the start to allow multi-site collaborative development, integration of 3rd-party software (e.g. OpenCV, Orocos), multi-language support (C++, Python, Java, LISP, Octave, Lua, and others in various states of brokenness), and efficient peer-to-peer communication between heterogeneous computers and networks. ROS is open-source and commercial-friendly (BSD). Much more information is available here.
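The heart of that design is anonymous, topic-based publish/subscribe: publishers and subscribers never need to know about each other, only about a named topic. Here's a minimal in-process sketch of that pattern in plain Python. To be clear, this is a conceptual toy and not the actual ROS API; real ROS nodes discover each other through a master and exchange messages over the network.

```python
# Toy illustration of the anonymous publish/subscribe pattern ROS uses.
# NOT the ROS API -- just the core idea, in-process, with no networking.

from collections import defaultdict

class TopicBus:
    """Routes messages from publishers to subscribers by topic name."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Subscribers register a callback; they never learn who publishes.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Publishers push to a topic; they never learn who subscribes.
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []
bus.subscribe("scan", received.append)
bus.publish("scan", {"ranges": [1.0, 2.0]})
# 'received' now holds the message, delivered purely by topic name.
```

The payoff of this decoupling is exactly what the multi-language, multi-machine goals above require: any process that can serialize a message and name a topic can participate, regardless of language or host.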
It's a huge pain to deal with robotic sensors and actuators. Sorry, but it is. ROS has greatly reduced this pain, since most popular sensors now have ROS device drivers, but some pain remains. Just looking around our lab, we have widgets that talk TTL serial, RS-232, RS-485, RS-422, Firewire/400, Firewire/800, Ethernet (10Mb), Fast Ethernet (100Mb), EtherCAT (100Mb), USB 1.1, USB 2.0, and CANbus. These all have their advantages and disadvantages, of course, but I'd much rather deal with a single bus that is OS-neutral and has plenty of bandwidth. Here is more about my current approach.
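Part of what makes Ethernet attractive as that single bus: a sensor that speaks UDP needs nothing more than a fixed packet layout, which any OS and language can parse without a vendor driver. A hypothetical packet format (made up for illustration, not any protocol actually used in our lab) might look like this:

```python
import struct

# Hypothetical sensor packet: 32-bit sequence number, 32-bit microsecond
# timestamp, and four 16-bit ADC readings, all big-endian ("network order").
PACKET_FMT = "!IIHHHH"

def pack_reading(seq, t_us, adc):
    """Serialize one sensor reading into a fixed-size wire packet."""
    return struct.pack(PACKET_FMT, seq, t_us, *adc)

def unpack_reading(data):
    """Parse a wire packet back into (sequence, timestamp, adc tuple)."""
    seq, t_us, a0, a1, a2, a3 = struct.unpack(PACKET_FMT, data)
    return seq, t_us, (a0, a1, a2, a3)

pkt = pack_reading(42, 1000, (100, 200, 300, 400))
assert len(pkt) == struct.calcsize(PACKET_FMT)  # 16 bytes on the wire
```

The sequence number lets the receiver detect dropped datagrams, which is the usual price of choosing UDP over a reliable transport.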
I started working on low-cost localization techniques for a course project, and have been having fun with it since then. The main idea is to fuse a bunch of cheap sensors (wifi, webcam, magnetometer, accelerometer, etc.) to create a holistic localizer which can leverage the strengths of each sensor to overcome the weaknesses of each individual sensor. More here.
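The core of such a holistic localizer is weighting each sensor by how much you trust it. A minimal sketch of the idea, assuming independent Gaussian noise (which real wifi and magnetometer errors only approximate), fuses two estimates of the same quantity by inverse-variance weighting:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Fuse two independent Gaussian estimates of the same quantity.

    The noisier (higher-variance) sensor pulls the answer less, and the
    fused variance is always lower than either input's.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# A coarse wifi fix (sigma ~ 5 m, so variance 25) disagrees with a
# short-horizon dead-reckoning estimate (sigma ~ 1 m); the fused
# estimate lands near the better sensor.
pos, var = fuse(12.0, 25.0, 10.0, 1.0)
```

This is the one-dimensional, static case; the full problem adds motion models and non-Gaussian failure modes (e.g. wifi multipath), but the "weight by confidence" principle is the same.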
I spend some time playing around with GPS signal processing, which started during efforts on the Stanford Autonomous Helicopter project. In that project, we were trying to push the limits of autonomous (R/C) helicopter aerobatics, and in so doing, spent a lot of time trying to come up with a robust localization solution for small aerobatic vehicles. While researching the feasibility of a differential carrier-phase multi-receiver GPS solution to this problem, we started building our own software and hardware receivers. I built a small FPGA-based multi-antenna datalogging package which can log the raw IF outputs of several L1 front-ends, and a software receiver (in C) to perform acquisition, code/carrier tracking, and navigation using these datafiles. I am currently experimenting with this setup in the hope that a low-end FPGA can be combined with a high-performance embedded computer to create an open hardware platform that will allow for experimentation at all stages of the GPS processing pipeline. Current work centers on using Signals of Opportunity (SOOP) to gracefully handle GPS dropouts using the timing information embedded in digital TV and cellular signals. The more I learn about digital radios, the more shocked I am that they actually work :)
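Acquisition, the first stage of that pipeline, boils down to correlating the received samples against a local replica of the satellite's spreading code at every possible code phase, and looking for a peak. Here's a toy version using a made-up ±1 chip sequence in place of a real 1023-chip C/A Gold code (real acquisition also has to search over Doppler, which this sketch ignores):

```python
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)    # stand-in for a C/A Gold code

true_phase = 317                             # unknown delay we want to recover
received = np.roll(code, true_phase)         # delayed copy of the code...
received += 0.5 * rng.standard_normal(1023)  # ...plus receiver noise

# Circular cross-correlation at every code phase, done in one shot via
# the FFT:  corr[k] = sum_n received[n] * code[n - k]
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real

est_phase = int(np.argmax(corr))             # the peak marks the code phase
```

The FFT trick is what makes a full code-phase search cheap (O(N log N) instead of O(N^2)), which matters when you are also sweeping hundreds of Doppler bins per satellite.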
I have built many robots over the years. I need to document them better somewhere. A few years ago, we played around with a few iRobot Creates. We do a ton of laser cutting around here, and have tried many different methods of turning SolidWorks drawings into clean laser-cutter toolpaths. Sadly, they have almost all had terrible, terrible flaws. Our latest attempt is to use Inkscape to automatically correct for the laser kerf.
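Kerf compensation itself is simple geometry: the beam vaporizes a strip of material of width k centered on the programmed path, so outside cuts come out undersized and holes come out oversized by one kerf width each. A sketch of the correction for circular features (the 0.2 mm kerf below is an illustrative number, not a measurement; real kerf depends on material, thickness, and power):

```python
KERF_MM = 0.2  # illustrative only; measure it for your material and laser

def compensated_diameter(desired_mm, is_hole):
    """Offset a circular cut path by half the kerf on each side.

    Outlines (outside cuts) lose kerf/2 all around, so grow the path;
    holes (inside cuts) gain kerf/2 all around, so shrink the path.
    """
    if is_hole:
        return desired_mm - KERF_MM
    return desired_mm + KERF_MM

# To end up with a 10.0 mm hole, draw a 9.8 mm circle in the toolpath;
# to end up with a 10.0 mm disc, draw a 10.2 mm circle.
```

For arbitrary (non-circular) paths the same idea becomes a polygon offset by kerf/2, which is exactly the sort of operation a vector tool like Inkscape can apply to a whole drawing at once.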