When broadband monopolies pushed out scrappy local ISPs, we all suffered

Over time, computers have become easier to use and the internet easier to access. It used to be that people needed special training to be able to use software on a machine. Now, small children can do it. Instead of a long, noisy process of connecting through dial-up, our devices can connect to the internet (and each other) instantly, without human intervention or even awareness. Mostly this is a good thing. More intuitive design means getting more people online and bringing more access to powerful tools for self-expression and community. It would be excruciating to try to use sophisticated online tools and platforms with old-school modems and routers. One advantage of older technologies, though, is that they forced us to think about what’s under the hood of the devices we use every day. The clicking, whirring, and beeping of old-school dial-up made it obvious that digital connections don’t just magically appear—they have to be built and maintained. Read More …

AI trained on fake faces could help fix a big annoyance with mask wearing

Last March, when we all started wearing masks, phone makers suddenly had a big problem. The facial recognition systems used to authenticate users on their phones no longer worked. The AI models that powered them couldn’t recognize users’ faces because they’d been trained using images of only unmasked faces. The unique identifiers they’d been trained to look for were suddenly hidden. Phone makers needed to expand their training data to include a wide assortment of images of masked faces, and quickly. But scraping such images from the web comes with privacy issues, and capturing and labeling high numbers of images is cost- and labor-intensive. Enter Synthesis AI, which has made a business of producing synthetic images of nonexistent people to train AI models. The San Francisco-based startup needed only a couple of weeks to develop a large set of masked faces, with variations in the type and position of the mask on the face. It then delivered them to its phone-maker clients—which the company says include three of the five largest handset makers in the world—via an application programming interface (API). With the new images, the AI models could be trained to rely more on facial features outside the borders of the mask when recognizing users’ faces.

[Image: courtesy of Synthesis AI]

Phone makers aren’t the only ones facing training-data challenges. Developing computer-vision AI models requires a large number of images with attached labels describing what each image contains, so that the machine can learn what it is looking at. But sourcing or building huge sets of these labeled images in an ethical way is difficult. For example, controversial startup Clearview AI, which works with law enforcement around the country, claims to have scraped billions of images from social networking sites without consent. Read More …
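The underlying augmentation idea can be sketched in a few lines. The toy Python below expands a set of unmasked "faces" (here just arrays) with synthetically occluded variants, so a model trained on the combined set sees both conditions. The function names and the blank-out-the-lower-rows occlusion are illustrative stand-ins, not Synthesis AI's actual pipeline, which composites photorealistic rendered masks onto synthetic faces.

```python
import numpy as np

def add_synthetic_mask(face: np.ndarray, mask_value: float = 0.0,
                       coverage: float = 0.5) -> np.ndarray:
    """Return a copy of `face` with its lower `coverage` fraction occluded,
    crudely simulating a face mask covering nose, mouth, and chin."""
    masked = face.copy()
    rows = face.shape[0]
    start = int(rows * (1 - coverage))
    masked[start:] = mask_value
    return masked

def augment_with_masks(faces, coverages=(0.4, 0.5, 0.6)):
    """Expand a training set of unmasked faces with masked variants at
    several coverage levels, keeping the originals alongside them."""
    augmented = list(faces)
    for face in faces:
        for c in coverages:
            augmented.append(add_synthetic_mask(face, coverage=c))
    return augmented

# Hypothetical 64x64 grayscale "faces" standing in for real images.
faces = [np.random.rand(64, 64) for _ in range(10)]
dataset = augment_with_masks(faces)
print(len(dataset))  # 10 originals + 10 * 3 masked variants = 40
```

A model trained on such a set is pushed toward features above the occlusion line (eyes, brows, hairline), which is the effect the article describes.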

PearPop wants to boost your social following by connecting you to TikTok stars for collabs

In the social media ecosystem, there are influencers seeking new revenue streams and aspiring influencers looking to grow their followings. PearPop wants to be the bridge that connects the two. PearPop, which launched last October, is a platform where users pay TikTok influencers to collaborate on content. The influencers set their price for a duet, stitch, or sound (prices range anywhere from $15 to $3,333 per post), and users have the option to pay outright or bid a higher amount if there’s strong demand. In turn, that access to top influencers could boost a growing account. It’s an idea that’s catching on with investors and creators. PearPop recently announced raising $16 million in a Series A led by Alexis Ohanian’s Seven Seven Six, with angel investors including Gary Vaynerchuk, Sean “Diddy” Combs, Mark Cuban, Snoop Dogg, and YouTube star Jimmy Donaldson, aka MrBeast. PearPop currently has more than 10,000 creators on the platform (including such celebrities as Heidi Klum, Snoop Dogg, Shaquille O’Neal, and Kerry Washington) and has facilitated more than 1,000 transactions. (The company takes a 25% cut.) These early collabs have yielded some success stories. Model Leah Svoboda went from 20,000 to 141,000 followers after a PearPop duet with Anna Shumate (10.2 million followers). After musician Tobias Dray paid $25 to collaborate with Katelyn Elizabeth (1.6 million followers), who used one of his tracks as a sound on TikTok, uses of that song jumped from 30 to 671. “I always thought there should be a way to pay someone to collaborate with you directly,” says Cole Mason, founder and CEO of PearPop. “It blew my mind that there wasn’t a way to do that.”

Making a market

Cole Mason [Photo: courtesy of PearPop]

It’s easy to compare PearPop to the celebrity shout-out platform Cameo, but PearPop is establishing a distinct lane by creating a two-sided exchange with creators: High-level influencers earn revenue and budding influencers gain social capital. Read More …

We don’t need weak laws governing AI in hiring—we need a ban

Sometimes, the cure is worse than the disease. When it comes to the dangers of artificial intelligence, badly crafted regulations that give a false sense of accountability can be worse than none at all. This is the dilemma facing New York City, which is poised to become the first city in the country to pass rules on the growing role of AI in employment. More and more, when you apply for a job, ask for a raise, or wait for your work schedule, AI is choosing your fate. Alarmingly, many job applicants never realize that they are being evaluated by a computer, and they have almost no recourse when the software is biased, makes a mistake, or fails to accommodate a disability. While New York City has taken the important step of trying to address the threat of AI bias, the problem is that the rules pending before the City Council are bad, really bad, and we should listen to the activists speaking out before it’s too late. Some advocates are calling for amendments to this legislation, such as expanding definitions of discrimination beyond race and gender, increasing transparency, and covering the use of AI tools in hiring, not just their sale. But many more problems plague the current bill, which is why a ban on the technology is presently preferable to a bill that sounds better than it actually is. Industry advocates for the legislation are cloaking it in the rhetoric of equality, fairness, and nondiscrimination. But the real driving force is money. AI fairness firms and software vendors are poised to make millions for the software that could decide whether you get a job interview or your next promotion. Software firms assure us that they can audit their tools for racism, xenophobia, and inaccessibility. But there’s a catch: None of us know if these audits actually work. Given the complexity and opacity of AI systems, it’s impossible to know what requiring a “bias audit” would mean in practice.
As AI rapidly develops, it’s not even clear if audits would work for some types of software. Even worse, the legislation pending in New York leaves the answers to these questions almost entirely in the hands of the software vendors themselves. The result is that the companies that make and evaluate AI software are inching closer to writing the rules of their industry. This means that those who get fired, demoted, or passed over for a job because of biased software could be completely out of luck. Read More …
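For concreteness, one long-standing and minimal notion of a hiring audit is the EEOC’s “four-fifths rule”: no group’s selection rate should fall below 80% of the highest group’s rate. The sketch below computes that check in Python with hypothetical numbers. It illustrates how thin a legally undefined “bias audit” could be in practice; the bill itself specifies no such metric, which is the point the article makes.

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (selected, total applicants)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes):
    """Apply the EEOC four-fifths (80%) rule: the lowest group selection
    rate divided by the highest must be at least 0.8 to pass."""
    rates = selection_rates(outcomes)
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio >= 0.8

# Hypothetical outcomes from a screening tool, for illustration only.
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
ratio, passes = four_fifths_check(outcomes)
print(round(ratio, 2), passes)  # 0.6 False
```

A tool could pass this single ratio test while still discriminating on disability, age, or proxies for protected traits, which is why critics argue an unspecified audit requirement guarantees little.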

Would a ‘SNAP’ program for broadband help bridge the homework divide?

Sarah Kelsey is an instructional coordinator at Greenup County School District in eastern Kentucky. Her school population is mostly rural, and internet access is limited. “The COVID-19 pandemic is causing a lot of unequal learning experiences for our students,” Kelsey said. “About 25 percent of our students don’t have access to internet, which is causing a great deal of students to fall behind.” The experience of Greenup County Schools is unacceptable, but unfortunately all too common. At a time when access to educational opportunities is so critical for long-term success, an estimated 17 million students in unserved and underserved communities lack the connectivity that makes distance learning possible. And new research from Morning Consult shows that while more than three-quarters of parents and teachers are concerned about today’s homework gap, more than 70 percent also expect the traditional classroom learning environment to rely more heavily on technology after the pandemic. If we think this is a problem just for parents, we are wrong. Policymakers and business leaders should also be concerned. The digital divide has been with us for much too long, and it now poses a crisis in education that threatens an entire generation of leaders and innovators. The stakes are high, and Sal Khan, founder of the Khan Academy online learning platform, described them to us: “Even when the school districts, the cities and the local telecom carriers have done heroic efforts to get kids internet access, there are still 10-15% of the kids that are disengaged. If we don’t really engage them, we are going to see long-term consequences for economic viability.” (Khan Academy is among our collaborators as we work to bridge the homework gap by providing free services, devices, and educational content to schools and communities.) AT&T is helping by providing free hotspots and internet to students around the country.
Earlier this year we provided Sarah Kelsey’s district in Greenup, Ky., and more than 100 organizations and schools with free wireless hotspots and connectivity as part of our $10 million Connected Nation commitment. And today AT&T is committing more than $2 billion over the next three years to deepen our relationships while expanding affordability and subsidies to help bridge the digital divide. Read More …