Sometimes, the cure is worse than the disease. When it comes to the dangers of artificial intelligence, badly crafted regulations that give a false sense of accountability can be worse than none at all. This is the dilemma facing New York City, which is poised to become the first city in the country to pass rules on the growing role of AI in employment.

More and more, when you apply for a job, ask for a raise, or wait for your work schedule, AI is choosing your fate. Alarmingly, many job applicants never realize that they are being evaluated by a computer, and they have almost no recourse when the software is biased, makes a mistake, or fails to accommodate a disability.

While New York City has taken the important step of trying to address the threat of AI bias, the problem is that the rules pending before the City Council are bad, really bad, and we should listen to the activists speaking out before it’s too late. Some advocates are calling for amendments to this legislation, such as expanding the definitions of discrimination beyond race and gender, increasing transparency, and covering the use of AI tools in hiring, not just their sale. But many more problems plague the current bill, which is why a ban on the technology is, for now, preferable to a bill that sounds better than it actually is.

Industry advocates for the legislation are cloaking it in the rhetoric of equality, fairness, and nondiscrimination. But the real driving force is money. AI fairness firms and software vendors stand to make millions selling the software that could decide whether you get a job interview or your next promotion.

Software firms assure us that they can audit their tools for racism, xenophobia, and inaccessibility. But there’s a catch: None of us know whether these audits actually work. Given the complexity and opacity of AI systems, it’s impossible to know what requiring a “bias audit” would mean in practice. And as AI rapidly develops, it’s not even clear that audits would work for some types of software.

Even worse, the legislation pending in New York leaves the answers to these questions almost entirely in the hands of the software vendors themselves. The result is that the companies that make and evaluate AI software are inching closer to writing the rules of their own industry. That means those who get fired, demoted, or passed over for a job because of biased software could be completely out of luck.