How tech could help the trans community access health information

“I’m never going to one of these f***ing support groups again.” It was 2014, and I had just come home after two hours of huddling in a church basement with other transgender people like me. I’d recently come out, and I was desperately seeking resources and information about how to move forward. The support group I had attended wasn’t atypical: people went around in a circle and shared their experiences. On rare occasions, someone would speak about a specific challenge they were going through and shed light on how they were overcoming it—or trying to. On a really good night, someone might share which therapist was least likely to make your life hell as you collected letters deeming you “not crazy,” a draconian but necessary step for many of the legal and medical aspects of transition. Read More …

Gen. Charles Q. Brown Jr., America’s first Black Air Force chief, on race, tech, and the trouble with AI

General Charles Q. Brown Jr. became the first Black chief of staff of the Air Force during a perilous moment for the United States. In the time between Brown’s nomination and his unanimous confirmation by the Senate, George Floyd died under the knee of officer Derek Chauvin on a street in Minneapolis. While angry protests and a national reckoning over race unfolded around the country, Brown made the difficult decision to speak out with unusual frankness and depth of feeling for a military leader. “I’m thinking about how my nomination provides some hope but also comes with a heavy burden,” he said in a video addressed to Air Force personnel. “I can’t fix centuries of racism in our country, nor can I fix decades of discrimination that may have impacted members of our Air Force.” Brown also entered his role as the U.S. was navigating a rapidly evolving global threat environment. The four-star general spent a good part of his career leading the Air Force’s fight against nonstate terror groups, chiefly ISIS, in Iraq and Afghanistan. But now the U.S. is increasingly threatened by major state actors, mainly a resurgent Russia and an emergent China. These new opponents may attack in ways that aren’t necessarily addressable with fighter planes and missiles. It will be Brown’s job to oversee the Air Force’s shift in investment away from legacy platforms and toward technologies that will allow the U.S. to compete in the battle theaters of the future. I spoke to the general about these emerging threats, the Air Force’s work with U.S. Read More …

The devastating cost of the Big Tech billionaires’ immense wealth

COVID-19 was a boon for the superrich. There are few better examples than the founders, CEOs, and spouses of the five Big Tech giants: Amazon’s Jeff Bezos and MacKenzie Scott, Microsoft’s Bill Gates, Facebook’s Mark Zuckerberg, Google’s Larry Page and Sergey Brin, and Apple’s Tim Cook and Laurene Powell Jobs. I call them the tech barons. The recently released Forbes World’s Billionaires List includes some shocking figures about our tech overlords. At the start of 2020, the tech barons were collectively worth $419 billion. A year later, their wealth had soared to $651 billion—a 56% increase. The hoarding of that wealth harms us all: It diverts resources away from those who need them most and, by allowing the tech barons to influence government policy, corrodes democratic society. Most of us will never grow our wealth by 56% in a year. But wealth begets wealth. Read More …

AI trained on fake faces could help fix a big annoyance with mask wearing

Last March, when we all started wearing masks, phone makers suddenly had a big problem: The facial recognition systems used to authenticate users on their phones no longer worked. The AI models that powered them had been trained using images of only unmasked faces, so the unique identifiers they’d learned to look for were suddenly hidden. Phone makers needed to expand their training data to include a wide assortment of images of masked faces, and quickly. But scraping such images from the web comes with privacy issues, and capturing and labeling large numbers of images is cost- and labor-intensive. Enter Synthesis AI, which has made a business of producing synthetic images of nonexistent people to train AI models. The San Francisco-based startup needed only a couple of weeks to develop a large set of masked faces, with variations in the type and position of the mask on the face. It then delivered them to its phone-maker clients—which the company says include three of the five largest handset makers in the world—via an application programming interface (API). With the new images, the AI models could be trained to rely more on facial features outside the borders of the mask when recognizing users’ faces. Phone makers aren’t the only ones facing training-data challenges. Developing computer-vision AI models requires a large number of images with labels describing what each image shows, so the machine can learn what it is looking at. But sourcing or building huge sets of these labeled images in an ethical way is difficult. For example, controversial startup Clearview AI, which works with law enforcement around the country, claims to have scraped billions of images from social networking sites without consent. Read More …
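To make the technique concrete, here is a minimal sketch of mask augmentation for face images, written in Python. It is not Synthesis AI’s pipeline (the company renders photorealistic masks onto fully synthetic faces); it simply composites a crude mask-shaped polygon onto existing cropped face photos. The faces/ input folder, the polygon geometry, and the color choices are all assumptions for illustration.

```python
# A minimal sketch of mask augmentation for face-recognition training data.
# Assumptions: face images are pre-cropped and roughly aligned; the mask is
# approximated as a simple polygon over the lower half of the face. A real
# pipeline like the one described would render photorealistic masks on
# fully synthetic faces instead.
import random
from pathlib import Path

from PIL import Image, ImageDraw

MASK_COLORS = [(70, 130, 180), (40, 40, 40), (240, 240, 240)]  # blue, black, white

def add_synthetic_mask(face: Image.Image) -> Image.Image:
    """Composite a crude surgical-mask polygon onto a cropped face image."""
    img = face.convert("RGB").copy()
    w, h = img.size
    draw = ImageDraw.Draw(img)
    # Jitter the vertical placement to mimic variation in how masks are worn.
    top = int(h * random.uniform(0.48, 0.58))
    bottom = int(h * random.uniform(0.85, 0.95))
    draw.polygon(
        [
            (int(w * 0.18), top),     # upper-left corner, near the ear
            (int(w * 0.82), top),     # upper-right corner
            (int(w * 0.70), bottom),  # tapers in under the chin
            (int(w * 0.30), bottom),
        ],
        fill=random.choice(MASK_COLORS),
    )
    return img

if __name__ == "__main__":
    out_dir = Path("masked_faces")
    out_dir.mkdir(exist_ok=True)
    for path in Path("faces").glob("*.jpg"):  # hypothetical input folder
        add_synthetic_mask(Image.open(path)).save(out_dir / path.name)
```

Fine-tuning a recognition model on masked and unmasked versions of the same identities would, in principle, push it to weight periocular features (the region the mask leaves exposed) more heavily, which is the effect the article describes.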

We don’t need weak laws governing AI in hiring—we need a ban

Sometimes, the cure is worse than the disease. When it comes to the dangers of artificial intelligence, badly crafted regulations that give a false sense of accountability can be worse than none at all. This is the dilemma facing New York City, which is poised to become the first city in the country to pass rules on the growing role of AI in employment. More and more, when you apply for a job, ask for a raise, or wait for your work schedule, AI is choosing your fate. Alarmingly, many job applicants never realize that they are being evaluated by a computer, and they have almost no recourse when the software is biased, makes a mistake, or fails to accommodate a disability. While New York City has taken the important step of trying to address the threat of AI bias, the problem is that the rules pending before the City Council are bad, really bad, and we should listen to the activists speaking out before it’s too late. Some advocates are calling for amendments to this legislation, such as expanding definitions of discrimination beyond race and gender, increasing transparency, and covering the use of AI tools in hiring, not just their sale. But many more problems plague the current bill, which is why a ban on the technology is presently preferable to a bill that sounds better than it actually is. Industry advocates for the legislation are cloaking it in the rhetoric of equality, fairness, and nondiscrimination. But the real driving force is money. AI fairness firms and software vendors are poised to make millions for the software that could decide whether you get a job interview or your next promotion. Software firms assure us that they can audit their tools for racism, xenophobia, and inaccessibility. But there’s a catch: None of us know if these audits actually work. Given the complexity and opacity of AI systems, it’s impossible to know what requiring a “bias audit” would mean in practice. As AI rapidly develops, it’s not even clear if audits would work for some types of software. Even worse, the legislation pending in New York leaves the answers to these questions almost entirely in the hands of the software vendors themselves. The result is that the companies that make and evaluate AI software are inching closer to writing the rules of their industry. This means that those who get fired, demoted, or passed over for a job because of biased software could be completely out of luck. Read More …