Deformable cloths are everywhere, yet current robotic systems are not equipped to handle them, for two main reasons: it is hard to represent and learn a cloth's configuration effectively, and it is difficult to model how actions physically affect the cloth. Fortunately, the two problems are interrelated: a better visual representation of cloth can help generate better actions, and meaningful manipulation can help learn that representation. My advisors have tackled each problem separately; my current research takes the next step by combining and extending their work into interactive cloth manipulation. Since November, I have been developing metrics for what it means for a cloth to be “folded” or “unfolded”, along with a self-supervised robot learning algorithm that trains in simulation to unfold cloths into a reasonable configuration. This unfolded state can then be used for folding, but the step after that is even more exciting. The algorithm's visual perception module cannot train on real cloth instances because dense labels are unavailable. I hope to use my unfolding algorithm to generate this data, letting GarmentNets train itself on real-life clothes and addressing both cloth-interaction problems at once.
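What "unfolded" means quantitatively is part of the open design question above. For concreteness, here is a minimal sketch of one common choice in the cloth-manipulation literature: top-down coverage, the fraction of the cloth's fully flattened area it covers when seen from above. The function name, the particle-cloud cloth representation, and the bin size are illustrative assumptions, not the project's actual metric.

```python
import numpy as np

def top_down_coverage(vertices, flat_area, cell=0.01):
    """Fraction of the cloth's fully flattened area visible from above.

    vertices  : (N, 3) array of cloth particle positions, in meters
    flat_area : area of the cloth when fully flattened, in m^2
    cell      : side length of each rasterization bin, in meters
    """
    # Project particles onto the table plane and bin them into a grid;
    # the set of occupied cells approximates the cloth's silhouette.
    occupied = np.unique(np.floor(vertices[:, :2] / cell).astype(int), axis=0)
    covered_area = occupied.shape[0] * cell ** 2
    # A crumpled cloth covers little of its flat area; an unfolded one
    # approaches 1.0 (clipped, since discretization can overshoot).
    return min(covered_area / flat_area, 1.0)


# Example: a 0.3 m x 0.3 m cloth simulated as a 30 x 30 particle grid.
if __name__ == "__main__":
    xs, ys = np.meshgrid(np.linspace(0, 0.3, 30), np.linspace(0, 0.3, 30))
    flat = np.stack([xs.ravel(), ys.ravel(), np.zeros(900)], axis=1)
    print(top_down_coverage(flat, flat_area=0.09))  # ~1.0 when fully flat
```

A score near 1.0 indicates an unfolded configuration suitable as a starting point for folding, while crumpled states score much lower; such a scalar signal is what makes self-supervised training in simulation possible.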
Lab: CAIR
Direct Supervisor: Shuran Song
Position Dates: Summer 2022
Hours per Week: 10
Number of positions: 2
Position Type: Hybrid (both remote and on-site)
Qualifications: Computer Vision, Graphics
Eligibility: Freshman, Sophomore, Junior, Senior
Shuran Song, [email protected]