Seven very simple steps to design more ethical AI

No matter how powerful, all technology is neutral. Electricity can be designed to kill (the electric chair) or save lives (a home on the grid in an inhospitable climate). The same is true for artificial intelligence (AI), which is an enabling layer of technology much like electricity. AI systems have already been designed to help or hurt humans. A group at UCSF recently built an algorithm to save lives through improved suicide prevention, while China has deployed facial recognition AI systems to subjugate ethnic minorities and political dissenters. It is therefore impossible to assign a valence to AI broadly; it depends entirely on how it is designed.

To date, that design has been careless. AI blossomed with companies like Google and Facebook, which, in order to give away free stuff, had to find other ways for their AI to make money. They did this by selling ads. Advertising has long been in the business of manipulating human emotions; big data and AI merely allowed this to be done far more effectively and insidiously than before. AI disasters, such as Facebook's algorithms being co-opted by foreign political actors to influence elections, could and should have been predicted from this careless use of AI. They have highlighted the need for more careful design. Even AI pioneers like Stuart Russell (co-author of the field's standard textbook) now advocate that "standard model AI" be replaced with beneficial AI. Organizations ranging from the World Economic Forum to Stanford to the New York Times are convening groups of experts to develop design principles for beneficial AI. As a contributor to these initiatives, I believe the following principles are key.

Make it easy for users to understand data collection

The user must know that data is being collected and what it will be used for. Technologists must ensure informed consent on data collection.
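
To make this principle concrete, here is a minimal sketch in Python of what purpose-specific informed consent can look like in practice. The names (ConsentRegistry, collect, the example categories and purposes) are hypothetical, not from any particular product: the point is simply that data is stored only when the user has explicitly agreed to that exact category and purpose, and that consent can be revoked.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    """What the user agreed to: which data, for which purpose, and when."""
    user_id: str
    data_category: str  # e.g. "location", "browsing_history"
    purpose: str        # e.g. "store_finder", "ad_targeting"
    granted_at: datetime


class ConsentRegistry:
    """Tracks explicit, purpose-specific consent before any collection happens."""

    def __init__(self) -> None:
        self._records: dict[tuple[str, str, str], ConsentRecord] = {}

    def grant(self, user_id: str, data_category: str, purpose: str) -> None:
        key = (user_id, data_category, purpose)
        self._records[key] = ConsentRecord(
            user_id, data_category, purpose, datetime.now(timezone.utc)
        )

    def revoke(self, user_id: str, data_category: str, purpose: str) -> None:
        # The user can withdraw consent at any time.
        self._records.pop((user_id, data_category, purpose), None)

    def has_consent(self, user_id: str, data_category: str, purpose: str) -> bool:
        return (user_id, data_category, purpose) in self._records


def collect(registry: ConsentRegistry, user_id: str, data_category: str,
            purpose: str, value: object) -> None:
    """Refuse to store data unless the user consented to this exact use."""
    if not registry.has_consent(user_id, data_category, purpose):
        raise PermissionError(
            f"No consent from {user_id} to collect {data_category} for {purpose}"
        )
    # Placeholder for real storage.
    print(f"storing {data_category}={value!r} for purpose {purpose}")


# Usage: consent granted for one purpose does not carry over to another.
registry = ConsentRegistry()
registry.grant("user-42", "location", "store_finder")
collect(registry, "user-42", "location", "store_finder", (37.77, -122.42))  # ok
# collect(registry, "user-42", "location", "ad_targeting", ...)  # raises PermissionError
```

The design choice that matters here is that consent is keyed to a purpose, not just to a data type, so data gathered for one stated use cannot silently be repurposed for another.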
