AI trained on fake faces could help fix a big annoyance with mask wearing

Last March, when we all started wearing masks, phone makers suddenly had a big problem. The facial recognition systems used to authenticate users on their phones no longer worked. The AI models that powered them couldn't recognize users' faces because they'd been trained only on images of unmasked faces; the unique identifiers they'd learned to look for were suddenly hidden.

Phone makers needed to expand their training data to include a wide assortment of images of masked faces, and quickly. But scraping such images from the web raises privacy issues, and capturing and labeling large numbers of images is costly and labor-intensive.

Enter Synthesis AI, which has made a business of producing synthetic images of nonexistent people to train AI models. The San Francisco-based startup needed only a couple of weeks to develop a large set of masked faces, with variations in the type and position of the mask on the face. It then delivered them to its phone-maker clients, which the company says include three of the five largest handset makers in the world, via an application programming interface (API). With the new images, the AI models could be trained to rely more on facial features outside the borders of the mask when recognizing users' faces.

[Image: courtesy of Synthesis AI]

Phone makers aren't the only ones facing training data challenges. Developing computer-vision AI models requires a large number of images with attached labels describing what each image contains, so that the machine can learn what it is looking at. But sourcing or building huge sets of these labeled images in an ethical way is difficult. For example, controversial startup Clearview AI, which works with law enforcement around the country, claims to have scraped billions of images from social networking sites without consent.
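
To give a sense of what this supervised training step looks like in practice, here is a minimal, hypothetical sketch of fine-tuning a vision model on a folder of labeled synthetic face images. The directory layout, class structure, and hyperparameters are illustrative assumptions, not a description of Synthesis AI's or any phone maker's actual pipeline.

```python
# Hypothetical sketch: fine-tuning a pretrained vision backbone on labeled
# synthetic face images. Folder names, paths, and settings are assumptions.
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

# Assumed layout: synthetic_faces/<identity_label>/<image>.png,
# where each identity folder mixes masked and unmasked renders.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("synthetic_faces", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the classifier head so the
# model learns to separate identities even when the lower face is covered.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):  # a few passes over the synthetic set
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Because the labels come "for free" with the synthetic renders, this kind of loop can be rerun on a new batch of images (say, with different mask styles and positions) without any manual annotation work.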
