Amazon is selling its no-checkout tech to other stores, and we have questions

After two years of running its own cashierless “Amazon Go” stores, Amazon now wants other retailers to start using the tech. The “Just Walk Out” service, which launched this week, lets retailers equip their stores with cameras, weight sensors, and other technology to detect what people grab from the shelves. Shoppers scan a credit card when they enter the store, and the system automatically bills them for each item when they exit, with an optional kiosk allowing them to enter an email address for receipts.

It’s unclear what size of stores Amazon is targeting, but the company says it’s ideal for places where customers are in a rush and have long lines. The company told Reuters that it has “several” unnamed retail customers on board already. If Just Walk Out takes off, it could upend the entire brick-and-mortar retail system even without shifting ever-greater amounts of shopping online.

Yet in announcing the new program, Amazon has chosen not to discuss many fundamental issues, such as how it’ll affect jobs and what it will do with all the data it collects. The company declined to answer most questions for this story, instead referring to a brief question-and-answer section on its website.

Will Just Walk Out stores accept cash?

Although Amazon says it can retrofit existing stores with its tech, the company isn’t saying whether those stores could (or should) continue to accept cash. Read More …
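
Amazon hasn’t said how Just Walk Out works under the hood, but the shopper-facing flow it describes (scan a card at the gate, let sensors track what you pick up, get billed once on the way out, optionally leave an email for a receipt) is easy to sketch as a toy model. The class and method names below are hypothetical, purely to make that described flow concrete; none of this is Amazon’s API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

/** Toy model of the shopper-facing flow described above; not Amazon's design. */
class JustWalkOutSession {
    private final String cardToken;                                // captured when the shopper scans a card at the gate
    private final List<String> detectedItems = new ArrayList<>();  // filled by camera/weight-sensor events
    private Optional<String> receiptEmail = Optional.empty();      // optional kiosk entry for receipts

    JustWalkOutSession(String cardToken) { this.cardToken = cardToken; }

    void onItemPickedUp(String sku)   { detectedItems.add(sku); }    // sensor event: item left the shelf
    void onItemPutBack(String sku)    { detectedItems.remove(sku); } // sensor event: item returned
    void setReceiptEmail(String email) { receiptEmail = Optional.of(email); }

    /** Called when the shopper walks out: charge the stored card once for everything still held. */
    void onExit(PaymentGateway gateway) {
        gateway.charge(cardToken, detectedItems);
        receiptEmail.ifPresent(addr -> gateway.emailReceipt(addr, detectedItems));
    }
}

/** Stand-in for whatever billing backend actually settles the charge. */
interface PaymentGateway {
    void charge(String cardToken, List<String> skus);
    void emailReceipt(String email, List<String> skus);
}
```

The hard part of the real system, of course, is the sensor fusion that would produce those pick-up and put-back events reliably; the article says only that cameras and weight sensors handle that detection.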

Sober curious? There’s an app—in fact, a whole community—for that

When serial entrepreneur MJ Gottlieb, 48, was trying to get sober years ago, he completely avoided drinking establishments. That proved no easy feat: there were at least 14 bars within a two-block radius of his home in New York City, and so many friends and colleagues relied on the usual social outings. “There was like nothing else people would come [up with] than ‘let’s grab a drink’ or ‘let’s tailgate,’” says Gottlieb. “Everything seemed to be centered around alcohol.”

[Image: Loosid]

At the time, Gottlieb ran a strategic consulting firm that specialized in small brands. To unwind, he inevitably wound up in one of two places: coffee shops and diners. Those became his entire social scene. But it got old, quick. Read More …

We keep falling for phishing emails, and Google just revealed why

You should feel cranky about all the phishing emails you get: a grumpy frame of mind raises the odds that you won’t be fooled by the next phony invitation to log into your account.

At a briefing Wednesday evening at the Black Hat security conference in Las Vegas, Google security researcher Elie Bursztein and University of Florida security professor Daniela Oliveira shared that and other insights about the business of coaxing people into giving up their usernames and passwords.

The first thing to know about phishing: it’s not as random and sloppy as it might seem. “Phishers have constantly refined,” Bursztein said. The roughly 100 million phishing emails Google blocks every day fall into three main categories: highly targeted but low-volume spear phishing aimed at distinct individuals, “boutique phishing” that targets only a few dozen people, and automated bulk phishing directed at thousands or hundreds of thousands of people. Those categories differ in duration, too: Google typically sees boutique campaigns wrap up in seven minutes, while bulk phishing operations average 13 hours.

Google also sees most phishing campaigns target its commercial mail service. Bursztein said Google-hosted corporate email accounts were 4.8 times more likely to receive phishing emails than plain old Gmail accounts. Email services were the most commonly impersonated login pages in those attempts, at 42%, followed by cloud services (25%), financial institutions (13%), online retail (5%), and delivery services (4%). Bursztein noted that Google still can’t definitively identify many phishing emails—as improbable as that might seem, considering all the data it collects. That explains why Gmail shows an orange box above messages that look somewhat suspicious but aren’t necessarily attacks.

This is your brain on phishing attacks

The presentation also covered the human factors that make phishing easier. As Oliveira explained, “When we are in a good mood, our deception-detection accuracy tends to decline.” She cited research showing that increased levels of such feel-good hormones as testosterone, estrogen, oxytocin, serotonin, and dopamine increase people’s appetite for risk-taking, while a jump in the cortisol levels associated with stress makes us warier. Presumably, the soundtrack for your mail screening should not be Marvin Gaye’s “Let’s Get It On” but the J. Read More …

Seven very simple steps to design more ethical AI

No matter how powerful, all technology is neutral. Electricity can be designed to kill (the electric chair) or save lives (a home on the grid in an inhospitable climate). The same is true for artificial intelligence (AI), which is an enabling layer of technology much like electricity. AI systems have already been designed to help or hurt humans: a group at UCSF recently built an algorithm to save lives through improved suicide prevention, while China has deployed facial recognition AI systems to subjugate ethnic minorities and political dissenters. It is therefore impossible to assign a valence to AI broadly. It depends entirely on how it’s designed, and to date that design has been careless.

AI blossomed at companies like Google and Facebook, which, in order to give away free stuff, had to find other ways for their AI to make money. They did this by selling ads. Advertising has long been in the business of manipulating human emotions; big data and AI merely allowed this to be done far more effectively and insidiously than before. AI disasters, such as Facebook’s algorithms being co-opted by foreign political actors to influence elections, could and should have been predicted from this careless use of AI. They have highlighted the need for more careful design, a need recognized by AI pioneers like Stuart Russell (often called the father of AI), who now advocates that “standard model AI” be replaced with beneficial AI.

Organizations ranging from the World Economic Forum to Stanford to the New York Times are convening groups of experts to develop design principles for beneficial AI. As a contributor to these initiatives, I believe the following principles are key.

Make it easy for users to understand data collection

The user must know that data is being collected and what it will be used for. Technologists must ensure informed consent on data Read More …

These are the sneaky new ways that Android apps are tracking you

You could admire the tenacity if it didn’t come with such trickery: After years of effort by Google to stop Android apps from scanning users’ data without permission, app developers keep trying to find new work-arounds to track people. A talk at PrivacyCon, a one-day conference hosted by the Federal Trade Commission last Thursday, outlined a few of the ways apps are prying loose network, device, and location identifiers.

Officially, apps generally interact with Android through software hooks known as APIs, which gives the operating system the ability to manage their access. “While the Android APIs are protected by the permission system, the file system often is not,” said Serge Egelman, research director of the Usable Security and Privacy Group at the University of California at Berkeley’s International Computer Science Institute. “There are apps that can be denied access to the data, but then they find it in various parts of the file system.”

In a paper titled ‘50 Ways to Leak Your Data: An Exploration of Apps’ Circumvention of the Android Permissions System,’ Egelman and fellow researchers Joel Reardon, Álvaro Feal, Primal Wijesekera, Amit Elazari Bar On, and Narseo Vallina-Rodriguez outlined three categories of exploits discovered through an array of tests.

One common target, Egelman explained Thursday, is the hard-coded MAC address of a WiFi network, “a pretty good surrogate for location data.” The researchers ran apps on an instrumented version of Android Marshmallow (and, later, on Android Pie). Deep-packet inspection of network traffic found that apps built on such third-party libraries as the OpenX software development kit had been reading MAC addresses from a system cache directory; other apps exploited system calls or network-discovery protocols to get those addresses more directly. Egelman added that the workings of these apps often made the deception obvious to researchers: “There are many apps that we observed which try to access the data the right way through the Android API, and then, failing that, try and pull it off the file system.”

Obtaining a phone’s IMEI (International Mobile Equipment Identity), an identifier unique to each device, can be even more effective for persistent tracking. The researchers discovered that advertising libraries from Salmonads and Baidu would wait for an app containing their code to get permission from the user to read the phone’s IMEI, then copy that identifier to a file on the phone’s SD card that other apps built on those libraries could read covertly. “This corresponds to about a billion installs of the various apps that are exploiting this technique,” Egelman warned.

Finally, the team observed the Shutterfly photo-sharing app working around the lack of permission for location data by reading geotags off photos saved on the phone, and then transmitting those coordinates to Shutterfly’s server.
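
To make the MAC-address example concrete: assuming the “system cache directory” in question is the ARP table exposed at /proc/net/arp (a world-readable source of the WiFi gateway’s hardware address on Android versions of that era), an app denied location permission could still recover that location surrogate with a plain file read. The sketch below is a hypothetical illustration, not code from OpenX or any other named SDK, and newer Android releases have closed this path.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

/**
 * Illustrative sketch of the file-system side channel described above: instead of
 * asking the WiFi API (which requires a permission), read the world-readable ARP
 * cache and pull out the gateway's MAC address, "a pretty good surrogate for
 * location data." Hypothetical code; blocked on recent Android versions.
 */
class ArpCacheSideChannel {

    /** Returns the first MAC address listed in /proc/net/arp, or null if unreadable. */
    static String readGatewayMac() {
        try (BufferedReader reader = new BufferedReader(new FileReader("/proc/net/arp"))) {
            String line = reader.readLine(); // skip the header row
            while ((line = reader.readLine()) != null) {
                // Columns: IP address, HW type, Flags, HW address, Mask, Device
                String[] cols = line.trim().split("\\s+");
                if (cols.length >= 4 && cols[3].matches("([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}")) {
                    return cols[3]; // the hardware (MAC) address column
                }
            }
        } catch (IOException e) {
            return null; // file blocked (newer Android) or not present
        }
        return null; // no usable entries yet
    }
}
```

Because no permission guards a world-readable file, this is exactly the fallback Egelman describes: try the API first, and when that is denied, pull the same data off the file system.
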
Shutterfly communications director Sondra Harding responded in an email on Tuesday, saying the app only reads photos after a user allows access: “There are multiple opportunities in the user experience for granting this permission, including opting in to auto-upload, pulling a local photo into a product creation path, the app settings, etc.”

This study and another presented Thursday—‘Panoptispy: Characterizing Audio and Video Exfiltration from Android Applications,’ by Elleen Pan of Northeastern University with Jingjing Ren, Martina Lindorfer, Christo Wilson, and David Choffnes—did not, however, report evidence that Facebook’s apps were exploiting any loopholes to surreptitiously listen to ambient real-world audio. The theory that Facebook or others are doing that keeps coming up despite strenuous, on-the-record denials—and in any case, the current Android Pie release blocks apps from recording audio or video in the background. Read More …
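
For contrast with the work-arounds above, here is a minimal sketch of the permission-first path Shutterfly says it follows: check for the storage permission, prompt the user if it is missing, and only then read a photo’s EXIF geotag through the standard APIs. It uses stock Android calls (ContextCompat.checkSelfPermission, ActivityCompat.requestPermissions, ExifInterface), but it is an illustrative sketch under those assumptions, not Shutterfly’s actual code.

```java
import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import android.media.ExifInterface;

import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

import java.io.IOException;

/**
 * The sanctioned, permission-first route to photo metadata: ask the user for
 * READ_EXTERNAL_STORAGE at runtime and only then read the photo's EXIF geotag
 * through the normal API. A sketch for illustration, not any app's real code.
 */
class GeotagReader {
    static final int REQUEST_READ_STORAGE = 42; // arbitrary request code

    /** Returns {latitude, longitude} if access was granted and the photo is geotagged; null otherwise. */
    static double[] readGeotagIfAllowed(Activity activity, String photoPath) throws IOException {
        if (ContextCompat.checkSelfPermission(activity, Manifest.permission.READ_EXTERNAL_STORAGE)
                != PackageManager.PERMISSION_GRANTED) {
            // No permission yet: show the system prompt instead of hunting for a side channel.
            ActivityCompat.requestPermissions(activity,
                    new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, REQUEST_READ_STORAGE);
            return null;
        }
        ExifInterface exif = new ExifInterface(photoPath);
        float[] latLong = new float[2];
        return exif.getLatLong(latLong) ? new double[]{latLong[0], latLong[1]} : null;
    }
}
```

Nothing here is exotic; the point is that the sanctioned route surfaces a prompt the user can refuse, which is precisely what the file-system tricks described above avoid.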