‘No one was in the driver seat’ during Tesla crash that killed two

Authorities in Spring, Texas, say they’re 100% certain that no one was driving during a fatal Tesla crash on Saturday evening. According to KHOU-11, the 2019 Tesla Model S had two passengers inside, one aged 59 and the other aged 69, when the car went off the road at a slight curve, crashed into a tree, and burst into flames. Harris County Precinct 4 Constable Mark Herman said the fire took four hours and more than 30,000 gallons of water to extinguish, as the car’s batteries continued to reignite the flames. “[Investigators] are 100-percent certain that no one was in the driver seat driving that vehicle at the time of impact,” Herman said. “They are positive.” While authorities haven’t confirmed whether Tesla’s Autopilot feature was engaged at the time of the crash, Herman said it was “almost impossible” for anyone to have been in the driver seat at the time. Tesla did not immediately respond to a request for comment. Read More …

The devastating cost of the Big Tech billionaires’ immense wealth

COVID-19 was a boon for the superrich. There are few better examples than the founders, CEOs, and spouses of the five Big Tech giants: Amazon’s Jeff Bezos and MacKenzie Scott, Microsoft’s Bill Gates, Facebook’s Mark Zuckerberg, Google’s Larry Page and Sergey Brin, and Apple’s Tim Cook and Laurene Powell Jobs. I call them the tech barons. The recently released Forbes World’s Billionaires List includes some shocking figures about our tech overlords. At the start of 2020, the tech barons were collectively worth $419 billion. A year later, their wealth had soared to $651 billion—a 56% increase. The hoarding of that wealth harms us all: It diverts resources away from those who need them most and, by allowing the tech barons to influence government policy, corrodes democratic society. Most of us will never grow our wealth by 56% in a year. But wealth begets wealth. Read More …

AI trained on fake faces could help fix a big annoyance with mask wearing

Last March, when we all started wearing masks, phone makers suddenly had a big problem. The facial recognition systems used to authenticate users on their phones no longer worked. The AI models that powered them couldn’t recognize users’ faces because they’d been trained using images of only unmasked faces. The unique identifiers they’d been trained to look for were suddenly hidden. Phone makers needed to expand their training data to include a wide assortment of images of masked faces, and quickly. But scraping such images from the web comes with privacy issues, and capturing and labeling large numbers of images is cost- and labor-intensive. Enter Synthesis AI, which has made a business of producing synthetic images of nonexistent people to train AI models. The San Francisco-based startup needed only a couple of weeks to develop a large set of masked faces, with variations in the type and position of the mask on the face. It then delivered them to its phone-maker clients—which the company says include three of the five largest handset makers in the world—via an application programming interface (API). With the new images, the AI models could be trained to rely more on facial features outside the borders of the mask when recognizing users’ faces. [Image: courtesy of Synthesis AI] Phone makers aren’t the only ones facing training data challenges. Developing computer-vision AI models requires a large number of images with attached labels that describe what each image shows so that the machine can learn what it is looking at. But sourcing or building huge sets of these labeled images in an ethical way is difficult. For example, controversial startup Clearview AI, which works with law enforcement around the country, claims to have scraped billions of images from social networking sites without consent. Read More …
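
Neither Synthesis AI nor its clients have published training code, so purely as illustration, here is a minimal PyTorch sketch of the general approach the article describes: mixing real unmasked photos with synthetic masked renders of the same identities, so the model learns to weight features outside the mask region. The directory layout, the ResNet-18 backbone, and every hyperparameter below are assumptions for the sketch, not details from the article.

```python
# Sketch: fine-tune a face-recognition backbone on real + synthetic masked faces.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, ConcatDataset
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((160, 160)),
    transforms.ToTensor(),
])

# Hypothetical layout: one subfolder per identity, with real photos in
# data/real and vendor-supplied synthetic masked renders in data/masked.
# Both folders must contain the same identity subfolders so labels align.
real_faces = datasets.ImageFolder("data/real", transform=transform)
masked_faces = datasets.ImageFolder("data/masked", transform=transform)
train_loader = DataLoader(ConcatDataset([real_faces, masked_faces]),
                          batch_size=32, shuffle=True)

num_identities = len(real_faces.classes)
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, num_identities)  # identity head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Because masked and unmasked variants of each identity share a label, gradient descent pushes the model toward features both share, such as the eyes, brows, and forehead, rather than the occluded lower face.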

PearPop wants to boost your social following by connecting you to TikTok stars for collabs

In the social media ecosystem, there are influencers seeking new revenue streams and aspiring influencers looking to grow their followings. PearPop wants to be the bridge that connects the two. PearPop, which launched last October, is a platform where users pay TikTok influencers to collaborate on content. The influencers set their price for a duet, stitch, or sound (prices range anywhere from $15 to $3,333 per post), and users have the option to pay outright or bid a higher amount if there’s strong demand. In turn, that access to top influencers could boost a growing account. It’s an idea that’s catching on with investors and creators. PearPop recently announced raising $16 million in a Series A led by Alexis Ohanian’s Seven Seven Six, with angel investors including Gary Vaynerchuk, Sean “Diddy” Combs, Mark Cuban, Snoop Dogg, and YouTube star Jimmy Donaldson, aka MrBeast. PearPop currently has more than 10,000 creators on the platform (including such celebrities as Heidi Klum, Snoop Dogg, Shaquille O’Neal, and Kerry Washington) and has facilitated more than 1,000 transactions. (The company takes a 25% cut.) These early collabs have yielded some success stories. Model Leah Svoboda went from 20,000 to 141,000 followers after a PearPop duet with Anna Shumate (10.2 million followers). After musician Tobias Dray paid $25 for a collaboration with Katelyn Elizabeth (1.6 million followers), who used one of his tracks as a sound on TikTok, uses of the song jumped from 30 to 671. “I always thought there should be a way to pay someone to collaborate with you directly,” says Cole Mason, founder and CEO of PearPop. “It blew my mind that there wasn’t a way to do that.”

Making a market

[Photo: Cole Mason, courtesy of PearPop]

It’s easy to compare PearPop to the celebrity shout-out platform Cameo, but PearPop is establishing a distinct lane by creating a two-sided exchange with creators: High-level influencers earn revenue and budding influencers gain social capital. Read More …
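
The marketplace mechanics above reduce to a bit of arithmetic, sketched here as a toy example: the 25% cut comes from the article, while the function names and the rule that a bid must at least match the creator’s list price are guesses at how such an exchange might work, not PearPop’s actual logic.

```python
# Toy model of the two-sided exchange described above.
PLATFORM_FEE = 0.25  # the 25% cut PearPop takes, per the article

def creator_payout(price: float) -> float:
    """Amount the influencer keeps after the platform's cut."""
    return round(price * (1 - PLATFORM_FEE), 2)

def offer_qualifies(list_price: float, bid: float) -> bool:
    """Hypothetical rule: a bid counts if it meets the creator's list price;
    when demand is strong, users can bid higher to stand out."""
    return bid >= list_price

# Example: a collab listed at the article's $3,333 ceiling.
print(creator_payout(3333))         # 2499.75 to the creator
print(offer_qualifies(3333, 3500))  # True: an above-list bid qualifies
```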

We don’t need weak laws governing AI in hiring—we need a ban

Sometimes, the cure is worse than the disease. When it comes to the dangers of artificial intelligence, badly crafted regulations that give a false sense of accountability can be worse than none at all. This is the dilemma facing New York City, which is poised to become the first city in the country to pass rules on the growing role of AI in employment. More and more, when you apply for a job, ask for a raise, or wait for your work schedule, AI is choosing your fate. Alarmingly, many job applicants never realize that they are being evaluated by a computer, and they have almost no recourse when the software is biased, makes a mistake, or fails to accommodate a disability. While New York City has taken the important step of trying to address the threat of AI bias, the problem is that the rules pending before the City Council are bad, really bad, and we should listen to the activists speaking out before it’s too late. Some advocates are calling for amendments to this legislation, such as expanding definitions of discrimination beyond race and gender, increasing transparency, and covering the use of AI tools in hiring, not just their sale. But many more problems plague the current bill, which is why a ban on the technology is presently preferable to a bill that sounds better than it actually is. Industry advocates for the legislation are cloaking it in the rhetoric of equality, fairness, and nondiscrimination. But the real driving force is money. AI fairness firms and software vendors are poised to make millions for the software that could decide whether you get a job interview or your next promotion. Software firms assure us that they can audit their tools for racism, xenophobia, and inaccessibility. But there’s a catch: None of us know if these audits actually work. Given the complexity and opacity of AI systems, it’s impossible to know what requiring a “bias audit” would mean in practice. As AI rapidly develops, it’s not even clear if audits would work for some types of software. Even worse, the legislation pending in New York leaves the answers to these questions almost entirely in the hands of the software vendors themselves. The result is that the companies that make and evaluate AI software are inching closer to writing the rules of their industry. This means that those who get fired, demoted, or passed over for a job because of biased software could be completely out of luck. Read More …
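
To see how thin an undefined “bias audit” could be, consider one plausible baseline an auditor might reach for: the EEOC’s four-fifths rule, which flags adverse impact when any group’s selection rate falls below 80% of the highest group’s rate. The sketch below uses invented numbers; a real audit of an opaque hiring model would need far more than this single ratio.

```python
# Minimal sketch of a four-fifths (80%) adverse-impact check.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group name -> (candidates selected, total applicants)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate is below 80% of the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best < 0.8 for group, rate in rates.items()}

# Invented pass-through rates for a hypothetical AI screening tool:
# group_a passes 50 of 100 applicants, group_b passes 30 of 100.
print(four_fifths_flags({"group_a": (50, 100), "group_b": (30, 100)}))
# {'group_a': False, 'group_b': True} -> group_b shows adverse impact
```

Note what the check cannot do: it says nothing about why the model rejects people, whether its features proxy for protected traits, or how it handles disability accommodations, which is exactly the gap the article argues vendors would be left to define for themselves.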