For Fei-Fei Li, the first step of learning is seeing. We master it as toddlers: our eyes are perfectly designed to capture an image every 200 milliseconds without our noticing.
She wants computers to undergo the same process.
"Instead of focusing solely on better and better algorithms, my insight was to give the algorithms the kind of training data that a child is given through experiences in both quantity and quality." – Fei-Fei Li
Her first step was teaching machines to see. She took on the heroic task of enabling machines to recognize thousands of categories of real-world images. Her team used Amazon's Mechanical Turk to label millions of images. Eventually, they built ImageNet, a database that organizes all those labeled image files.
Tech giants like Google, Microsoft, and Baidu set impressive records on ImageNet, reaching roughly a 5% error rate. To make a long story short, after many rounds of testing and refinement, machines got their eyes.
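The error rate usually quoted for ImageNet is top-5 error: a prediction counts as correct if the true label appears among the model's five highest-scoring guesses. A minimal sketch of that metric, using made-up scores and labels purely for illustration:

```python
def top5_error(scores, true_labels):
    """Fraction of examples whose true label is NOT in the top 5 scores."""
    errors = 0
    for class_scores, truth in zip(scores, true_labels):
        # Indices of the five highest-scoring classes for this image
        top5 = sorted(range(len(class_scores)),
                      key=lambda i: class_scores[i],
                      reverse=True)[:5]
        if truth not in top5:
            errors += 1
    return errors / len(true_labels)

# Two toy "images" scored over eight classes (hypothetical numbers)
scores = [
    [0.10, 0.50, 0.20, 0.05, 0.05, 0.03, 0.04, 0.03],  # true class 1: in top 5
    [0.90, 0.02, 0.02, 0.02, 0.01, 0.01, 0.01, 0.01],  # true class 7: not in top 5
]
print(top5_error(scores, [1, 7]))  # → 0.5
```

On the real benchmark the same calculation runs over tens of thousands of validation images and 1,000 classes; a 5% top-5 error means the correct label was missing from the top five guesses for only 1 in 20 images.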
However, the software still does not function flawlessly. Sometimes it mistakes a toothbrush for a baseball bat. And who could forget Google's infamous error of tagging Black people as gorillas?
For the time being, machines cannot understand the context of a situation. Visual tests should probe those limits by asking the computer a series of questions about an image; wherever it fails to respond properly, there is more room for AI to develop.
So, starting from the sense of sight, machines are inching toward a functioning artificial brain.