Seven very simple steps to design more ethical AI

No matter how powerful, all technology is neutral. Electricity can be designed to kill (the electric chair) or to save lives (a home on the grid in an inhospitable climate). The same is true for artificial intelligence (AI), which is an enabling layer of technology much like electricity. AI systems have already been designed to help and to hurt humans: a group at UCSF recently built an algorithm to save lives through improved suicide prevention, while China has deployed facial recognition AI systems to subjugate ethnic minorities and political dissenters. It is therefore impossible to assign a valence to AI broadly; it depends entirely on how it is designed.

To date, that design has been careless. AI blossomed at companies like Google and Facebook, which gave their products away for free and so had to make money some other way. They did it by selling ads. Advertising has long been in the business of manipulating human emotions; big data and AI merely allowed this to be done far more effectively and insidiously than before. AI disasters, such as Facebook's algorithms being co-opted by foreign political actors to influence elections, could and should have been predicted from this careless use of AI. They have highlighted the need for more careful design, a need championed by AI pioneers like Stuart Russell, co-author of the field's standard textbook, who now advocates replacing "standard model AI" with beneficial AI. Organizations ranging from the World Economic Forum to Stanford to the New York Times are convening groups of experts to develop design principles for beneficial AI. As a contributor to these initiatives, I believe the following principles are key.

Make it easy for users to understand data collection

The user must know data is being collected and what it will be used for. Technologists must ensure informed consent on data Read More …
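One way to make that principle concrete in software is to gate every write behind explicit, purpose-specific consent. The sketch below is a hypothetical illustration, not drawn from the article; the DataCollector class, its method names, and the example purposes are all assumptions.

```python
# Hypothetical sketch of consent-gated data collection; names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    user_id: str
    granted_purposes: set = field(default_factory=set)  # e.g. {"analytics"}


class DataCollector:
    def __init__(self):
        self.consents = {}   # user_id -> ConsentRecord
        self.events = []     # only data the user has agreed to ends up here

    def grant_consent(self, user_id, purpose, explanation):
        """Consent only counts if the user was told what the data is for."""
        if not explanation:
            raise ValueError("Consent requires a plain-language explanation of the purpose")
        record = self.consents.setdefault(user_id, ConsentRecord(user_id))
        record.granted_purposes.add(purpose)

    def collect(self, user_id, purpose, payload):
        """Refuse to store anything the user has not explicitly agreed to."""
        record = self.consents.get(user_id)
        if record is None or purpose not in record.granted_purposes:
            return False                      # drop the data rather than collect silently
        self.events.append((user_id, purpose, payload))
        return True


collector = DataCollector()
collector.grant_consent("u1", "analytics", explanation="Used to improve the product")
assert collector.collect("u1", "analytics", {"clicks": 3})        # consented purpose
assert not collector.collect("u1", "advertising", {"clicks": 3})  # never consented
```

The point of the pattern is that the default is refusal: data for a purpose the user never saw explained is never stored.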

These 5 apps can help you find your next big idea, faster

They say a mind is a terrible thing to waste. Know what else is terrible to waste? Time! So instead of spinning endlessly in your Herman Miller waiting for inspiration to strike, check out these useful tools that can help you generate new ideas in the most expeditious fashion.

1. Set the mood

First, we need to get that beautiful mind of yours warmed up Read More …

“Machine teaching” is a thing, and Microsoft wants to own it

Microsoft is rallying behind a new buzzword as it tries to sell businesses on artificial intelligence. It's called "machine teaching," and it's loosely defined by Microsoft as a set of tools that human experts in any field can use to train AI on their own. After steadily developing and acquiring some of these tools, Microsoft is hoping to popularize the concept of machine teaching with a big public push. The hope is that more companies will build their own AI software—running on Microsoft's cloud computing platform, of course—even if they haven't hired their own AI experts.

"We believe that this is going to be one of the big transformative forces of how AI can be applied to a lot more scenarios and be available to a lot more people in the world," says Gurdeep Pall, Microsoft's corporate vice president of business AI.

Closing the chasm

Microsoft pitches machine teaching as a complement to machine learning, which refers to the way that AI systems analyze data and learn to predict things, like whether a photo contains a human face. With machine teaching, humans guide the system along by breaking a task into individual lessons, akin to how someone learning to play baseball might get coached on tee-ball before graduating to underhand pitches and full-blown fastballs. "Machine learning is all about algorithmically finding patterns in data," Pall says. "Machine teaching is about the transfer of knowledge from the human expert to the machine learning system."

Microsoft can't claim sole ownership of the term. Xiaojin (Jerry) Zhu, a professor at the University of Wisconsin-Madison, has used "machine teaching" to describe a set of approaches to training machine learning algorithms since 2013, and he and Microsoft agree there's some overlap in their definitions.

While Microsoft says machine teaching is best suited to fields like autonomous systems, where the AI has to decide between lots of potential real-world actions, it's also a way to make AI more accessible. With the right tools, a subject matter expert should be able to train an AI system without having to understand machine learning, in the same way that a baseball coach doesn't have to learn brain chemistry. "[Subject matter experts] can basically start using AI largely without understanding a lot about how machine learning itself is working," Pall says. "And they're able to basically transfer the knowledge that they have as human experts in a particular area to the AI that needs to run it."

Last year, Microsoft acquired a startup called Bonsai to help abstract away the complexities of AI development. Similar to how Visual Basic is a simpler programming language than C, Bonsai has its own language, called Inkling, which is supposed to be simpler than low-level AI development. Pall says that with these kinds of tools, industries such as energy, finance, and healthcare can build AI applications without having to hire their own AI experts, who are in high demand and short supply. Mark Hammond, Microsoft general manager for Business AI and former Bonsai CEO, developed a platform that uses machine teaching to help deep reinforcement learning algorithms tackle real-world problems. Read More …
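To make the lesson idea concrete, here is a minimal sketch of curriculum-style training in plain Python. It is an illustration only: the toy target-tracking task, the hand-authored lesson list, and the crude hill-climbing "learner" are all assumptions, and it does not use Microsoft's Bonsai platform or the Inkling language.

```python
# Minimal, hypothetical sketch of lesson-based ("machine teaching") training.
# The task, lessons, and thresholds are invented for illustration; a real system
# would swap the hill-climber for a proper reinforcement learning algorithm.
import random

# Each "lesson" narrows the task, like tee-ball before fastballs:
# the learner must track a target that moves faster in each lesson.
LESSONS = [
    {"name": "stationary target", "target_speed": 0.0, "pass_rate": 0.9},
    {"name": "slow target",       "target_speed": 0.5, "pass_rate": 0.8},
    {"name": "fast target",       "target_speed": 1.5, "pass_rate": 0.7},
]


def run_episode(gain, target_speed, steps=50):
    """Simulate a proportional controller chasing a target; True means success."""
    position, target = 0.0, 5.0
    for _ in range(steps):
        target += random.uniform(-target_speed, target_speed)
        position += gain * (target - position)       # the learner's "policy"
    return abs(target - position) < 1.0              # success = close at the end


def evaluate(gain, target_speed, episodes=100):
    """Fraction of episodes the current policy passes."""
    return sum(run_episode(gain, target_speed) for _ in range(episodes)) / episodes


def train_with_curriculum():
    gain = 0.05                                      # naive starting policy
    for lesson in LESSONS:                           # human-authored ordering
        while evaluate(gain, lesson["target_speed"]) < lesson["pass_rate"]:
            # crude hill climbing: keep a perturbed gain only if it scores at least as well
            candidate = min(1.0, max(0.0, gain + random.uniform(-0.1, 0.1)))
            if evaluate(candidate, lesson["target_speed"]) >= evaluate(gain, lesson["target_speed"]):
                gain = candidate
        print(f"passed lesson: {lesson['name']} (gain={gain:.2f})")
    return gain


if __name__ == "__main__":
    train_with_curriculum()
```

The expert's knowledge lives entirely in the ordering and thresholds of the lesson list; the learner only ever faces one lesson's difficulty at a time, which is the kind of knowledge transfer Pall describes.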