Why the Colonial Pipeline ransomware attack is a sign of things to come

Ransomware has grown fouler than ever, but it’s also grown up. The practice of using malware to encrypt files on a victim’s devices and then demanding a ransom payment to unlock them has advanced far beyond its origins as a nuisance for individual users. These days, it’s a massively profitable business that has spawned its own ecosystem of partner and affiliate firms. And as a succession of security experts made clear at the RSA Conference last week, we remain nowhere near developing an equivalent of a vaccine for this online plague. “It’s professionalized more than it’s ever been,” said Raj Samani, chief scientist at McAfee, in an RSA panel. “Criminals are starting to make more money,” said Jen Miller-Osborn, deputy director of threat intelligence at Palo Alto Networks’ Unit 42, in another session. Read More…

4 wild concepts show what a futuristic, Ikea-designed smart home might look like

Although Ikea is only starting to dabble in smart home tech with light bulbs, speakers, and blinds, the Swedish furniture giant is showing off a vision that’s much more ambitious. Someday, for instance, you might use augmented reality to visualize how your computer and TV share data with one another, or to look up the environmental impact of all of your gadgets. You might even use spatial audio to designate parts of your home as “silent zones,” or enlist a digital avatar to warn you of potential privacy threats. To be clear, Ikea isn’t turning any of these ideas into products anytime soon, but it enlisted its innovation research and design lab, Space10, and a group of external designers to come up with them as a way to reflect on what the future holds for smart homes. The first of these “Everyday Experiments” concepts launched last year, and the latest batch focuses on privacy and trust in an effort to explore what a respectfully designed, noninvasive smart home might look like. “We wanted to go about it where privacy is not a dystopia, and we’re not working at it from a dystopic point of view,” says Tony Gjerlufsen, Space10’s head of technology. “Privacy shouldn’t be a chore either.”

[Image: Nicole He and Eran Hilleli/Ikea]

The augmented home

The clearest example is “Invisible Roommates” by designers Nicole He and Eran Hilleli. Using augmented reality, it envisions smart home devices as cute characters that sit next to their real-world counterparts. When those devices communicate with one another, the AR versions represent the flow of data as a trail of paper planes.

[Animation: Nicole He and Eran Hilleli/Ikea]

Another experiment by the London-based design studio Field, called “Digital Buddy,” touches on a similar idea. It envisions users talking to a small, blob-like avatar to ask about the privacy policies of other products and services. The AI would then scan through those companies’ terms of service and read out relevant information, such as whether a service can read the content of your private messages. A second concept by Field, called “Chain of Traceability,” imagines that household objects would be registered on a blockchain, which would store information about materials, carbon footprint, and production process. The idea would be for users to scan those products with an AI application so they could make more informed purchasing decisions.

[Photo: Field/Ikea]

Perhaps the wildest idea of all, though, is Yuri Suzuki’s “Sound Bubbles,” which imagines using an AI app to cordon off silent parts of the home. Users could then steer sound away from those quiet spaces using 3D spatial audio.

From concept to reality

Since these are design concepts rather than working prototypes, there’s no guarantee that Ikea will realize any of them. Still, some of Space10’s ideas do have roots in existing technology.

[Animation: Yuri Suzuki/Ikea]

Augmented reality applications, for instance, are widely available today on iOS and Android devices, and Ikea itself has an app that helps people visualize furniture in their homes. AI assistants like Apple’s Siri, Amazon’s Alexa, and Google Assistant are now ubiquitous, as are resources like TOSDR for making terms of service agreements more digestible. Read More…

This virtual team-building guide is the cure for Zoom happy hours

As many companies have seen employees working from home for more than a year thanks to the coronavirus pandemic, they’ve been searching for ways beyond conference calls and purely work-focused Zoom meetings for workers to connect online. Naturally, that’s led to an influx of online team-building activities, often replicating the types of activities companies would once engage in for in-person bonding, from wine tastings to virtual escape rooms. But while there’s no shortage of potential Zoom-based social activities for companies to book to entertain employees or clients, it can still be a lot of work for managers to find activities that are right for a particular audience. That was the experience of Healey Cypher, chief operating officer at the venture studio Atomic, who said he found himself spending substantial amounts of time looking for better alternatives to the oft-dreaded Zoom happy hour. The experience led Cypher and his team to experiment with reaching out to vendors offering online experiences and to companies that might be in need of quality group entertainment, in order to help pair them together. Read More…

Today’s AI isn’t prepared for the messiness of reality

What began as a warning label on financial statements has become useful advice for how to think about almost anything: “Past performance is no guarantee of future results.” So why do so many in the AI field insist on believing the opposite? Too many researchers and practitioners remain stuck on the idea that the data they gathered in the past will produce flawless predictions for future data. If the past data are good, then the outcome will also be good in the future. That line of thinking received a major wake-up call recently when an MIT study found that the 10 most-cited data sets were riddled with label errors (in the training dataset, a picture of a dog is labeled as a cat, for example). These data sets form the foundation of how many AI systems are built and tested, so pervasive errors could mean that AI isn’t as advanced as we may think. After all, if AI can’t tell the difference between a mushroom and a spoon, or between the sound of Ariana Grande hitting a high note and a whistle (as the MIT study found and this MIT Tech Review article describes), then why should we trust it to make decisions about our health or to drive our cars? The knee-jerk response from academia has been to refocus on cleaning up these benchmark data sets. We can continue to obsess over creating clean data for AI to learn from in a sterile environment, or we can put AI in the real world and watch it grow. Currently, AI is like a mouse raised to thrive in a lab: if it’s let loose into a crowded, polluted city, its chances of surviving are pretty slim.

Every AI Will Always Be Wrong

Because AI started in academia, it suffers from a fundamental problem of that environment: the drive to control how things are tested. This, of course, becomes a problem when academia meets the real world, where conditions are anything but controlled. Tellingly, AI’s relative success in academic settings has begun to work against it as businesses adopt it. Read More…
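To see why label errors matter so much, here is a minimal, hypothetical sketch (not taken from the article or the MIT study): a toy 1-nearest-neighbor classifier trained on synthetic one-dimensional data, once with clean labels and once with 30% of the training labels flipped, the same kind of dog-photo-tagged-as-cat mistake described above. All names and numbers here are illustrative assumptions.

```python
import random

random.seed(0)

def make_data(n, flip_rate=0.0):
    """Two 1-D Gaussian classes (means 0 and 3); optionally flip a
    fraction of labels to simulate annotation errors in a data set."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = random.gauss(3.0 * label, 1.0)
        if random.random() < flip_rate:
            label = 1 - label  # mislabeled example, e.g. a dog tagged "cat"
        data.append((x, label))
    return data

def nn_predict(train, x):
    """Label of the single nearest training point -- a deliberately
    simple classifier that is very sensitive to label noise."""
    return min(train, key=lambda t: abs(t[0] - x))[1]

def accuracy(train, test):
    return sum(nn_predict(train, x) == y for x, y in test) / len(test)

test_set = make_data(500)                          # clean held-out data
acc_clean = accuracy(make_data(500, flip_rate=0.0), test_set)
acc_noisy = accuracy(make_data(500, flip_rate=0.3), test_set)
print(f"clean training labels: {acc_clean:.2f}")
print(f"30% flipped labels:    {acc_noisy:.2f}")
```

On this synthetic setup the noisy-label model scores noticeably worse on the same clean test set, even though the underlying feature distribution never changed, which is the core of the argument: a benchmark riddled with bad labels both trains and evaluates models against the wrong target.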
