One response to the call by experts in robotics and artificial intelligence for a ban on "killer robots" ("lethal autonomous weapons systems", or LAWS, in the language of international treaties) is to say: shouldn't you have thought about that sooner? There are shades of science-fictional preconceptions in a 2012 report on killer robots by Human Rights Watch. Besides, there is a continuum between drone warfare, soldier-enhancement technologies and LAWS that can't be reduced to "man versus machine". By all means let's try to curb our worst impulses to beat ploughshares into swords, but telling the international arms trade that it can't make killer robots is like telling soft-drinks manufacturers that they can't make orangeade.
Before autonomous trucks and taxis hit the road, manufacturers will need to solve problems far more complex than collision avoidance and navigation (see "10 Breakthrough Technologies 2017: Self-Driving Trucks"). These vehicles will have to anticipate and defend against a full spectrum of malicious attackers wielding both traditional cyberattacks and a new generation of attacks based on so-called adversarial machine learning (see "AI Fight Club Could Help Save Us from a Future of Super-Smart Cyberattacks"). When hackers demonstrated that vehicles on the roads were vulnerable to several specific security threats, automakers responded by recalling and upgrading the firmware of millions of cars. The computer vision and collision avoidance systems under development for autonomous vehicles rely on complex machine-learning algorithms that are not well understood, even by the companies that rely on them (see "The Dark Secret at the Heart of AI").
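The core of an adversarial machine-learning attack is a small, deliberate perturbation that pushes an input across a model's decision boundary. A minimal sketch of the idea, using an invented linear "classifier" (the weights, data, and class names are hypothetical, not any vendor's actual system):

```python
import numpy as np

# Toy FGSM-style adversarial perturbation against a linear scorer.
# All values here are synthetic; real attacks target deep vision models.
rng = np.random.default_rng(0)
w = rng.normal(size=16)          # fixed, "trained" classifier weights
x = rng.normal(size=16)          # a benign input

def score(v):
    # >0 means one class, <0 the other (labels are arbitrary here)
    return float(w @ v)

# For a linear model the gradient of the score w.r.t. the input is just w,
# so stepping against sign(w) moves the score toward the opposite class.
eps = 0.5
x_adv = x - eps * np.sign(w) * np.sign(score(x))

print("clean score:", score(x))
print("adversarial score:", score(x_adv))
```

The same sign-of-gradient step, applied to a deep network's loss gradient instead of fixed linear weights, is the classic fast-gradient-sign attack; the point of the sketch is only that a tiny per-coordinate change can move the model's output a long way.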
Hearing plays an essential role in how you navigate the world, and, so far, most autonomous cars can't hear. Waymo recently spent a day testing its system with emergency vehicles from the Chandler, Arizona, police and fire departments. Police cars, ambulances, fire trucks, and even unmarked cop cars chased, passed, and led the Waymo vans through the day and into the night. Sensors aboard the vans recorded vast quantities of data that will help create a database of all the sounds emergency vehicles make, so in the future, Waymo's driverless cars will know how to respond.
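At its simplest, a database of recorded sounds lets a car match an incoming sound against stored signatures. A toy sketch of that lookup step (the signatures, vehicle labels, and matching rule are invented for illustration; Waymo's actual pipeline is not public):

```python
import numpy as np

# Hypothetical sound "database": each entry is a crude frequency
# signature for one emergency-vehicle type.
database = {
    "ambulance": np.array([0.9, 0.1, 0.7]),
    "fire_truck": np.array([0.2, 0.8, 0.6]),
    "police_car": np.array([0.5, 0.5, 0.9]),
}

def identify(signature):
    # nearest-neighbour match: pick the stored entry closest to the input
    return min(database, key=lambda k: np.linalg.norm(database[k] - signature))

print(identify(np.array([0.85, 0.15, 0.65])))
```

A real system would extract signatures from raw audio (e.g. spectral features) and would need to be robust to distance, Doppler shift, and background noise, which is exactly what a day of recordings helps with.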
When it comes to digital assistants like Amazon's Alexa, my four-year-old niece Hannah Metz is an early adopter. "Alexa, play 'It's Raining Tacos,'" she commanded on a recent sunny afternoon, and the voice-controlled helper immediately complied, blasting through its speaker a confection of a song with lines like "It's raining tacos from out of the sky" and "Yum, yum, yum, yum, yumidy yum." Digital assistants are most popular among people aged 25 to 34, a group that includes plenty of parents of young children and parents-to-be. Her interest in her digital assistant jibes with the findings of a recent MIT study, in which researchers looked at how children ages three to 10 interacted with Alexa, Google Home, a tiny game-playing robot called Cozmo, and a smartphone app called Julie Chatbot.
"So in the long term, the robots can decide when is the best time to start the healing and start heating up." That, and they pack well: a four-foot-long soft robot arm can deflate and ship in far less space than a traditional robot arm. But what Terryn's team has shown is that you could theoretically have an injured soft robot deflate itself and heat up to repair the wound. So get ready to see a lot more soft robots and, at some point, soft robots you can stab without getting in trouble.
Henri Waelbroeck, director of research at machine learning trade execution system Portware, says rather poetically that the system "reads the tea leaves" in market data to distinguish different sorts of orders and execute trades more efficiently. "This framework enables the deployment of deep learning techniques, essentially processing data through an architecture of agents; each processes the information at their disposal and produces an output which is then consumed by the next agent, and so on. Combining these in different ways enables you to create potentially interesting model architectures," he said.
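The chained-agent idea Waelbroeck describes can be sketched in a few lines: each agent consumes the previous agent's output, and recombining agents yields different architectures. The agent names and logic below are hypothetical stand-ins, not Portware's actual system:

```python
from typing import Callable, List

def normalize(ticks: List[float]) -> List[float]:
    # agent 1: rescale a price series into [0, 1]
    lo, hi = min(ticks), max(ticks)
    return [(t - lo) / (hi - lo) for t in ticks] if hi > lo else [0.0] * len(ticks)

def momentum(series: List[float]) -> float:
    # agent 2: crude momentum signal, last value minus first
    return series[-1] - series[0]

def decide(signal: float) -> str:
    # agent 3: turn the signal into an action
    return "buy" if signal > 0 else "sell"

def run_pipeline(data, agents: List[Callable]):
    out = data
    for agent in agents:   # each agent's output feeds the next
        out = agent(out)
    return out

print(run_pipeline([101.0, 100.5, 102.0], [normalize, momentum, decide]))
```

Swapping, reordering, or replacing agents (say, substituting a learned model for `momentum`) is what "combining these in different ways" amounts to in this sketch.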
Dino Mehanovic, John Bass, Thomas Courteau, David Rancourt, and Alexis Lussier Desbiens from the University of Sherbrooke realized that perching with a fixed-wing aircraft doesn't need to involve a stall to achieve that vertical and ultra low-speed approach, as long as you can maintain control over the aircraft. We are thinking about various failure causes (unsuitable states during the approach, smooth surface for the microspines) and failure detection timing (before touchdown, at touchdown and after touchdown). You also have to consider numerous factors that are sometime hard to quantify: efficiency of gears, reuse of some components between flight and climbing, transition time, propeller size, operating away from the design point, battery size, etc. Autonomous Thrust-Assisted Perching of a Fixed-Wing UAV on Vertical Surfaces, by Dino Mehanovic, John Bass, Thomas Courteau, David Rancourt, and Alexis Lussier Desbiens from the University of Sherbrooke in Canada, was presented at the 2017 Living Machines Conference at Stanford, where it won a Best Paper award.
If there aren't enough examples of a particular accent or vernacular, then these systems may simply fail to understand you (see "AI's Language Problem"). "If you analyze Twitter for people's opinions on a politician and you're not even considering what African-Americans are saying or young adults are saying, that seems problematic," O'Connor says. Solon Barocas, an assistant professor at Cornell and a cofounder of the event, says the field is growing, with more and more researchers exploring the issue of bias in AI systems. Sharad Goel, an assistant professor at Stanford University who studies algorithmic fairness and public policy, says the issue is not always straightforward.
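The mechanism behind this failure mode is easy to demonstrate: a model fit mostly on one group's data works well for that group and poorly for an underrepresented one. A synthetic sketch (the data, groups, and one-parameter "model" are all invented; real speech systems are vastly more complex):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample(n, shift):
    # labels are +/-1; "shift" models a group-specific difference
    # (e.g. an accent) that moves the feature for both classes
    y = rng.integers(0, 2, n) * 2 - 1
    x = 2.0 * y + shift + rng.normal(size=n)
    return x, y

# training set: 95% group A (shift 0), only 5% group B (shift 3)
xa, ya = sample(950, 0.0)
xb, yb = sample(50, 3.0)
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# one-parameter model: threshold at the midpoint of the two class means,
# which is dominated by group A's data
thr = (x_train[y_train == 1].mean() + x_train[y_train == -1].mean()) / 2

def accuracy(x, y):
    pred = np.where(x > thr, 1, -1)
    return float((pred == y).mean())

xat, yat = sample(2000, 0.0)   # test: group A
xbt, ybt = sample(2000, 3.0)   # test: group B
print("group A accuracy:", accuracy(xat, yat))
print("group B accuracy:", accuracy(xbt, ybt))
```

The learned threshold sits near group A's optimum, so group B's negative examples land on the wrong side of it, producing a large accuracy gap even though the model was "trained on both groups".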
The next level will be using artificial intelligence in election campaigns and political life. Highly sophisticated micro-targeting operations have already relied on big data and machine learning to influence people's emotions. Typically disguised as ordinary human accounts, bots spread misinformation and contribute to an acrimonious political climate on sites like Twitter and Facebook. For example, if a person is interested in environment policy, an AI targeting tool could be used to help them find out what each party has to say about the environment.
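The benign version of that targeting idea is just relevance matching: score each party's platform against a voter's stated interest. A toy keyword-overlap sketch (the parties and platform text are invented; a real tool would use learned text embeddings rather than word overlap):

```python
# Hypothetical party platforms, reduced to short text blurbs.
platforms = {
    "Party A": "lower taxes, road building, and new trade deals",
    "Party B": "clean energy, environment protection, and emission limits",
}

def match(interest: str, platforms: dict) -> str:
    # score = number of words shared between the interest and each platform
    words = set(interest.lower().split())
    scores = {
        party: len(words & set(text.lower().replace(",", "").split()))
        for party, text in platforms.items()
    }
    return max(scores, key=scores.get)

print(match("environment and clean energy policy", platforms))
```

The same machinery, pointed at emotions instead of policy questions and fed with behavioural data instead of a stated interest, is what makes micro-targeting controversial.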