Humans possess a remarkable ability to actively explore their environments in order to retrieve useful information. For example, we may flip over a book to see its title, push a large box to sense its weight, or dip our toes into a pool to gauge its temperature. In these examples, we use actions (flipping, pushing, dipping) to retrieve relevant information (title, weight, temperature) for planning future actions (reading the book, lifting the box, swimming).
Can our robots learn to do the same? In this project, you will develop computer vision and machine learning algorithms that enable robots to actively interact with their environment and use these interactions to improve their understanding of it.
Lab: Columbia RoboVision
Direct Supervisor: Shuran Song
Position Dates: 6/1/2020 - 9/1/2020
Hours per Week: 30
Paid Position: Yes
Credit: Yes
Number of positions: 3
Qualifications: Computer vision, robotics
Eligibility: Junior, Senior, Master's (SEAS only)
Shuran Song ([email protected])
Location of lab: CEPSR 6LW1