Lecture 21: Image Sensors (2)
aaaldaco2002

I’m actually taking the robotics classes (106A/B) right now, and they pair really well with this one on topics like cameras and rendering for computer vision. Would love to see more depth in this area!

yykkcc

I'm not sure whether machine learning is involved in this topic. If so, the number of training images must be extremely large for the robots to learn the features and colors of various objects.

rcorona

@yykkcc, yes, machine learning was indeed involved. I believe this is the original paper that resulted from this work (https://arxiv.org/abs/1603.02199). In the paper they state that they collected 800,000 grasp attempts across ~15 robots over the course of two months. My understanding is that they then train a model on this data to predict the success probability of candidate grasp motions given an initial image.
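To make that concrete, here's a minimal sketch of the idea described above: a network that encodes the initial camera image, fuses it with a candidate motion command, and outputs a grasp success probability. This is not the paper's actual architecture; the layer sizes and the 7-D motion vector are illustrative assumptions.

```python
# Hedged sketch: score candidate grasp motions from an initial image.
# Architecture and dimensions are illustrative, not from the paper.
import torch
import torch.nn as nn

class GraspScorer(nn.Module):
    def __init__(self):
        super().__init__()
        # Small CNN encoder for the initial camera image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse image features with a candidate motion command
        # (here assumed to be a 7-D end-effector displacement) and
        # predict the probability that the grasp succeeds.
        self.head = nn.Sequential(
            nn.Linear(32 + 7, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, image, motion):
        feats = self.encoder(image)                           # (B, 32)
        return self.head(torch.cat([feats, motion], dim=1))   # (B, 1)

# Trained with binary cross-entropy on (image, motion, success) triples;
# at test time the robot scores many sampled motions and executes the best.
scorer = GraspScorer()
p = scorer(torch.randn(1, 3, 64, 64), torch.randn(1, 7))
```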

sebzhao

Imaging for robotics is super interesting! When I worked on actual robotics research, I found it striking how sensitive robots can be to the cameras themselves and to small visual differences like camera angle or even dust. It makes it really interesting to see how graphics actually works, and to think about strategies for mitigating these differences when robots are deployed.

agao25

Definitely agree with @sebzhao! I remember working with a color sensor on a robot for a competition once and how pesky it was to fine-tune our camera and sensor to detect a certain colored ball. Curious which formulas, concepts, or algorithms from this class can be directly applied to other projects/classes like 106A/B in terms of fine-tuning color recognition/imaging. I know the manufacturing industry relies heavily on robots, so I wonder how they collect enough appropriate data to train their robots and then confidently deploy them on the line.
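One concept from color imaging that transfers directly to the ball-detection problem is thresholding in HSV rather than RGB, since hue is more stable under lighting changes. Below is a hedged sketch using OpenCV; the HSV range is an illustrative guess for an orange ball, not a universal value, and would need tuning per camera and lighting setup.

```python
# Hedged sketch: detect a colored ball via HSV thresholding.
# The HSV bounds are illustrative and camera-dependent.
import cv2
import numpy as np

def find_ball(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    lower, upper = np.array([5, 120, 120]), np.array([20, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening removes speckle noise from sensor variation.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Take the largest blob as the ball and fit a circle to it.
    c = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(c)
    return (int(x), int(y), int(radius))
```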

JunoLee128

This is really cool. I've never taken 106A or B, but I saw a group operating their robot arm for a project. It's really interesting how much math goes into defining the movement spaces for the arm's joint rotations, etc.
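For anyone curious what that math looks like, here's a minimal sketch of forward kinematics for a 2-link planar arm: joint angles map to an end-effector position, and sweeping the angles traces out the reachable workspace. The link lengths are illustrative assumptions.

```python
# Hedged sketch: forward kinematics of a 2-link planar arm.
# Link lengths l1, l2 are illustrative.
import numpy as np

def forward_kinematics(theta1, theta2, l1=1.0, l2=0.8):
    """End-effector (x, y) for joint angles theta1, theta2 (radians)."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

# Sweeping both joints traces out the arm's "movement space".
print(forward_kinematics(np.pi / 4, np.pi / 6))
```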

jonnypei

Something kind of random but relevant to robotics/ML research: manipulation/grasping experiments take a lot of $$$ because these arms are so expensive. The only groups that can make big leaps in this stuff are companies like Google/Meta or very well-funded labs (e.g. Goldberg's, Malik's).
