Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
Let us know if you have suggestions for next week, and enjoy today’s videos.
The latest version of the self-solving Rubik’s Cube is adorable in the way it tries to throw itself off the table while solving itself:
Not exactly an optimized solve, but we’ll forgive it, because that just means we get to watch it for longer. And here’s what it looks like if you’re holding it:
This is from the same folks who brought you the future of telepresence:
[ Human Controller ]
When you think of robotics, you likely think of something rigid, heavy, and built for a specific purpose. New “Robotic Skins” technology developed by Yale researchers flips that notion on its head, allowing users to animate the inanimate and turn everyday objects into robots.
To demonstrate the robotic skins in action, the researchers created a handful of prototypes. These include foam cylinders that move like an inchworm, a shirt-like wearable device designed to correct poor posture, and a device with a gripper that can grasp and move objects.
[ Yale ]
This is a clever bit of robot navigation: Rather than worry about obstacles and stuff, just draft behind humans who are going the same way you are, and let them do all the sensing and stuff.
We propose a novel navigation system for mobile robots in pedestrian-rich sidewalk environments. We developed a group surfing method which aims to imitate the optimal pedestrian group for bringing the robot closer to its goal. For pedestrian-sparse environments, we propose a sidewalk curb following method. Both approaches are shown in this video.
[ CARIS Lab ]
Some new footage from UMD’s Cassie here, including what looks like some accidental slip recovery that’s pretty impressive. And obligatory falling over at the end.
[ Paper ]
The LittleBot kit by Slant Robotics—which calls it “the world’s most affordable and simple robotics kit”—is now on Kickstarter.
The basic kit is only US $28, but a $79 pledge puts you down for a version with a bunch of sensors that’ll make it way more fun.
[ Kickstarter ]
Thanks Ben!
Last month we posted about mobile robots that could 3D print structures, and they’re now able to move and print at the same time, which is new.
[ NTU CRI ]
Using custom light design boards, ArtBoats glides across the water with changing color strips to create moonlit light paintings. The project, launched by MIT Media Lab PhD student Laura Perovich, aims to light up rivers to excite and engage local communities about their water.
[ MIT Media Lab ]
Rumble, crush and plow into the new UBTECH JIMU Robot BuilderBots Series: Overdrive Kit. With this kit you can create buildable, codable robots like DozerBot and DirtBot or design your own JIMU Robot creation. The fun is extended with the Blockly coding platform, allowing kids ages 8 and up to build and code these robots to perform countless programs and tricks.
[ UBTECH ]
A very Kiva-like pitch for an autonomous robot cart from 6 River Systems:
What’s it like to work with automated warehouse robots? Kind of like having your own personal helper. Meet Chuck, a collaborative mobile robot that works with employees to get the job done faster and better. With 6 River Systems and Chuck, warehouses are doubling or tripling their pick rates, at a fraction of the cost of traditional goods-to-person automation. When you need a better way, follow the leader!
[ 6 River Systems ]
Flying at 28 800 km/h, 400 km above Earth, from the International Space Station, ESA astronaut Alexander Gerst controlled a robot called Justin on 17 August 2018. Justin was stationed at the DLR German Aerospace Center in Oberpfaffenhofen, Germany.
ESA has run multiple experiments from the Space Station with robots to test the network, the control system and the robots on Earth. This is a new area for everybody involved and each aspect needs to be tested. This is the third in a series of SUPVIS-Justin orbital experiments. The first was carried out by ESA astronaut Paolo Nespoli in August 2017.
The SUPVIS-Justin experiment took around four hours in total. This included set-up, software updates and two hours of interaction between Alexander and Justin.
The tests were chosen to enact future scenarios in which astronauts orbiting distant planets and moons can instruct robots to do difficult or dangerous tasks and set up base before landing for further exploration. The experiment fits in ESA’s strategy to prepare for further exploration of our Universe.
[ ESA ]
We wrote about this exosuit when it was undergoing military testing, and Harvard has continued to optimize its performance:
[ Wyss Institute ]
KUKA partner andyRobot (AKA Andy Flessas) has developed a plug-in for industry leading animation software, Autodesk Maya, that allows KUKA robots to be programmed by anyone who knows how to animate inside Maya. Simply by dragging the 3D model of the robot through space in the virtual world and setting keyframes, a robot program can be created. Called Robot Animator, andyRobot’s Maya plug-in is aimed squarely at creative professionals, but could also find use in other less creative robot programming endeavors. The technology is enabled by KUKA’s EntertainTech, a piece of hardware and software that allows for the robot animation to be turned into a robot program and drive the robot.
[ andyRobot ]
The AEROARMS project is one of those EU 2020 flagship robotics initiatives with a long-winded website, but “aero” and “arms” is pretty much all you need to know.
In Bavaria, your Münchner weißwurst are palletized by robot.
Since when do sausages come in a can?
[ Kuka ]
Ron Arkin is the guest on this episode of The Interaction Hour, a podcast from GA Tech hosted by Ayanna Howard.
The emergence of artificial intelligence in society has elicited visceral reactions from people the world over, many of whom, thanks to portrayals in popular culture, can’t quite decide whether they believe we are building the future – or destroying it. Are we actually dealing with “killer robots?” Why has the public perception become so polarizing? Can we trust algorithms to make appropriate and trustworthy decisions, or do we risk too much by turning power over to the robots? Professor Ron Arkin, an expert in robotics and roboethics joins the podcast to discuss.
[ GA Tech ]
We’re still sad about what happened with Kuri, but whether or not it could have been a sustainable commercial product, the vision was certainly there. This SXSW talk from Mayfield CEO Kaijen Hsiao is still worth listening to.
[ Mayfield ]
In this week’s (particularly excellent) CMU RI Seminar, Herman Herman, director of the National Robotics Engineering Center (NREC) at CMU, shares “Lesson Learned from Two Decades of Robotics Development and Thoughts on Where We Go from Here.”
In this talk, Herman Herman will offer various lessons learned from developing various robots for the last 2 decades at the National Robotics Engineering Center. He will also offer his perspective on the future of autonomous robots in various industries, including self-driving cars, material handling and consumer robotics.
[ CMU RI ]
Don’t get me wrong—I love printed circuit boards. PCBs are, of course, essential in mass-produced products. Even for hobbyists, a small run assures almost perfectly repeatable circuits. And PCBs with a good ground plane are essential for high-frequency circuits operating at more than a few megahertz. A ground plane is a large area of copper that’s used as a low-inductance electrical return path from components to a circuit’s power supply. It prevents parasitic capacitance from smearing high-frequency signals into noise, and the absence of a ground plane is why you can’t build a high-frequency circuit using a breadboard and expect it to work well, or at all.
But rapid prototyping with PCBs has drawbacks compared with the speed and ease of building a circuit on a breadboard. You can quickly make your own PCBs—as long as you don’t mind the mess and some stained clothing and are willing to drill your own through holes. Or you can send your PCB layout to be made by a commercial service, but this takes several days at least and is more expensive.
So I began thinking about practical alternatives for high-frequency circuits that can provide maker-friendly prototypes that are fast to build, and easy to probe and alter. In this article, I’ll be presenting one key idea; some follow-on strategies will appear on the IEEE Spectrum website in the coming weeks. I should say that I make no claims of originality: Indeed I employ some oft-forgotten, decades-old techniques, but they turn out to be surprisingly useful in an age of surface-mount components operating at gigahertz frequencies.
The general approach is to start with a standard raw board, typically one made using FR4-grade resin, with its copper layer untouched. Instead of using etched traces, you interconnect components with lead wires while leaving most of the copper intact as a large ground plane. To demonstrate, I built a so-called comb generator circuit.
A comb generator produces a set of sharply defined harmonics across a wide range of frequencies, in this case up to 1 gigahertz, and it is a useful building block in microwave systems. The heart of my generator is a 74HC00 integrated circuit [PDF], which houses four NAND logic gates. A signal from a 25-megahertz surface-mount generator feeds two of the NAND gates in series so as to produce two square-wave signals that are slightly delayed. These signals go to a final NAND gate that generates narrow “sliver” pulses, which form the harmonic spectrum.
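If you want intuition for why a train of narrow pulses produces a comb of harmonics, here is a minimal Python sketch (my own illustration, not part of the original build) that models a 25-MHz train of sliver pulses and reports the level of its first few harmonics; the pulse width and sample rate are assumed values chosen just for the simulation.

```python
import numpy as np

# Illustrative model of a comb generator: a 25 MHz train of narrow
# "sliver" pulses carries strong harmonics at every multiple of 25 MHz.
fs = 10e9            # sample rate for the simulation (assumed), 10 GS/s
f0 = 25e6            # pulse repetition rate, set by the clock generator
pulse_width = 2e-9   # assumed sliver-pulse width (~two gate delays)

t = np.arange(0, 4e-6, 1 / fs)                  # 4 microseconds of signal
phase = (t * f0) % 1.0                          # position within each period
x = (phase < pulse_width * f0).astype(float)    # high during each sliver pulse

spectrum = np.abs(np.fft.rfft(x)) / len(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# Print the relative level of the first few harmonics of 25 MHz
for k in range(1, 9):
    idx = np.argmin(np.abs(freqs - k * f0))
    print(f"{k * 25:4d} MHz: {20 * np.log10(spectrum[idx] + 1e-12):6.1f} dB")
```

Because the pulses are so narrow, the harmonic amplitudes fall off only slowly with frequency, which is what makes the output usable as a reference comb up toward 1 GHz.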
To create a circuit, I divided the copper layer into two lands. In this case, I wanted one small area along the top to serve as a 5-volt supply rail. Everything else forms the ground plane.
To isolate the lands from each other, I stripped off three thin rectangles of copper to form the boundary of the supply rail. I did this by marking parallel lines with a scriber. Then I held a steel ruler very firmly against the scribe marks and used a hobby knife to cut all the way through the copper along the length of the rule (this takes a fair amount of force, and often several passes). Then, using my soldering iron to heat the copper between the lines, I peeled each strip away using tweezers.
So how do you mount an integrated circuit on a board that’s mostly a single ground plane with no through holes? You bend the IC’s ground pins back so that they touch the surface, and solder them to the ground plane, holding the IC in place. You bend the other pins parallel to the board and solder connecting wires directly to them. Sometimes this is referred to as the “dead bug” method because of the way ICs look with their legs sticking out. As a bonus, the dead-bug method makes soldering surface-mount components easier than with a conventional PCB, as the contacts are more accessible. The ground plane also provided a convenient place to attach the heat sink of my comb generator’s power regulator.
With a bit of practice cutting and peeling strips from the copper layer, it is possible to form an isolated land in the middle of a board to act as a tie point for both surface-mounted and through-hole components. Such an isolated land has very little capacitance to ground.
Another advantage of this construction technique is that you can easily check if your high-frequency circuit is in fact working as designed. Using a spectrum analyzer with a 500-ohm resistive probe (such as the Tektronix P6056) works well for such circuits, as long as the probe’s shield is connected to ground close to the circuit node being probed. By attaching a spring-mounted Pogo pin to the probe’s ground shield that extends down to the board, I’m able to make a ground connection right beside whatever pin I’m touching the probe to. (If you can’t find a P6056 or similar probe, you can make one yourself using a 450-ohm resistor in series with a 50-ohm length of coaxial cable, but remember to use a 50-ohm terminator at the analyzer end.)
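As a quick check of that homemade probe: the 450-ohm series resistor plus the 50-ohm terminated cable presents 500 ohms to the circuit node, and the analyzer measures the voltage across the 50-ohm termination:

\[
\frac{V_{\text{analyzer}}}{V_{\text{node}}} = \frac{50}{450 + 50} = \frac{1}{10} \approx -20\ \text{dB},
\]

so readings need to be scaled up by a factor of 10 (20 dB), and in return the circuit under test sees a relatively light 500-ohm load.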
The results of these methods are not always the prettiest-looking boards, but I’ve had good success in using these techniques to prototype microwave circuits. I hope you’ll follow me online to see some further elaborations of the ground-plane method, along with some other tips and tricks that I and some of my fellow circuit builders have been using for high-frequency circuits.
This article appears in the October 2018 print issue as “Resurrecting the ‘Dead Bug’ Method.”
Engineers are good at making minuscule antennas. Nowadays, we can cram antennas into our smartphones or smartwatches without much trouble. But from the perspective of those devices, the antennas inside them are still bulky and take up too much space.
Now, researchers at Drexel University, in Philadelphia, have developed a two-dimensional, “spray-on” antenna that can be used for wearables, IoT devices, and anything else that could conceivably benefit from thin, lightweight, and flexible antennas. They published their work today in Science Advances.
The transparent titanium carbide antenna is a good option for any device in which you don’t want to actually see the antenna, says Asia Sarycheva, a Ph.D. student in Drexel’s Department of Materials Science and Engineering, who conducted the research. “With smart windows, for example, you have to have transparent circuits to send signals. Or solar cells—you need everything [to be] transparent.”
Of course, the antenna isn’t truly two-dimensional, but it is incredibly thin. It’s made from a type of material called a MXene—characterized by a metal like titanium or molybdenum bonded with carbides or nitrides.
The “spray-on” description is exactly what it sounds like. By dissolving MXenes in water, researchers produce a kind of water-based ink. Then, “we can just use a simple spray gun from Home Depot, and just spray the shape we want,” Sarycheva says.
The antenna the group constructed is roughly 100 nanometers thick. For comparison, copper antennas used today have a minimum thickness of about 3,000 nm. Then, “as soon as the ink dries, it’s ready to be used as an antenna,” says Babak Anasori, a nanomaterials researcher at Drexel.
Anasori says their group became interested in this approach when they found, in 2016, that a titanium carbide MXene has high electromagnetic interference shielding, which suggested to them that it would block unwanted electromagnetic signals. Around the same time, they noticed that researchers were publishing papers using nanomaterials, like carbon nanotubes, as antennas.
In addition to resisting electromagnetic interference, MXenes like titanium carbide had another advantage over carbon nanotubes. “We had MXene, which has a higher conductivity than these materials,” he says. “MXene as a bulk material, when you have a solution and spray it—it has a higher conductivity than carbon nanotubes in bulk.” That was all the inspiration they needed to pursue MXenes as a viable candidate to compete with traditional antennas and emerging alternatives.
And an early test supported their suspicions that MXenes could work. “We were getting a performance equivalent with copper antennas with one simple test,” Anasori says. So they looked further into it.
There are two methods to make a traditional antenna. One option is to create an incredibly thin sheet of metal and cut the antenna you need out of it. The other option is to construct it out of a paste of nanometals. The problem in both situations is the amount of processing it takes to produce thin sheets of metal or nanometal paste. And even then, the antennas are still bulky, relative to the small devices engineers want to put them in.
Two-dimensional, sprayable antennas solve those problems. Not only are they lighter, they don’t require an immense amount of processing to create. “We can make ours in a few steps,” says Anasori. There’s no complicated manufacturing that goes into developing the MXenes—just add water and you’re ready to spray whatever antenna you need.
MXenes can also overcome a principle that limits how thin metal antennas can get, called the skin depth. In a nutshell, metal that is too thin cannot effectively pass electric current across its surface, which hampers its ability to function as an antenna. A 2D material like MXene doesn’t seem to share this restriction. “The conventional formula for MXene skin depth is 100 microns,” says Sarycheva, or about three orders of magnitude thicker than the antennas the team constructed.
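For context, the figure Sarycheva quotes comes from the classical skin-depth relation for a conductor (a textbook result, not something specific to this paper):

\[
\delta = \sqrt{\frac{2\rho}{\omega\mu}} = \sqrt{\frac{\rho}{\pi f \mu}},
\]

where ρ is the material’s resistivity, f the operating frequency, and μ its permeability. Plugging in MXene film properties at the antenna’s operating frequencies gives a skin depth on the order of 100 microns, roughly a thousand times the 100-nanometer thickness of the sprayed antennas.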
Yet their antennas still passed signals without trouble. That said, it’s not entirely clear why a principle like skin depth doesn’t apply to a material like MXene. “We want to dig a little bit further into how the skin depth is working on a 2-dimensional nanoscale,” says Sarycheva.
Anasori says their antennas could easily be used in devices like smartphones, or other devices where the antenna is enclosed. Because their current solution is water-based, and MXenes are hydrophilic, the antennas will slough off whatever surface they’re printed on as soon as they get wet again.
That’s why the team is now exploring new composites to use as a solution, rather than water, which would open the door to sprayable antennas for applications in wet or humid environments. One option the team is looking at is suspending the MXene in simple polymers.
“There is a lot of work to be done on the fundamental side,” says Anasori, “and many other applications we want to explore.” Besides devices that would benefit from their transparent nature, like smart windows or solar panels, Sarycheva believes these antennas would be a natural fit for RFIDs on products in workerless stores. But their speculations don’t end there.
“With respect to transparent antennas,” says Anasori, “we believe there are applications we cannot imagine.”
These application frameworks provide a substantial starting point for researchers to find ways to improve and build prototyping systems. Some example research includes exploring brand-new algorithms and architectures that can support the tremendous increase of the number of terminals, inventing new waveforms by which to modulate and demodulate the signals, or finding new multi-antenna architectures that fully exploit the degrees of freedom in the wireless medium.
The frameworks are designed from the ground up for easy modifiability. This allows wireless researchers to quickly get their real-time prototype up and running based on the LTE and 802.11 standards as well as MIMO technology. They can then primarily focus on the selected aspects of the protocol that they wish to improve, easily modify the designs, and compare their innovations with existing standards.
The PHY and MAC blocks are documented in the product and presented in a graphical block diagram form using LabVIEW Communications. They have clearly defined interfaces, documented system performance benchmarks, and computational resource usage. Additionally, LabVIEW Communications is shipped with a video-streaming application that shows the transfer of real-time data over the air using these standards-compliant wireless links.
Relevant parameters for the wireless links are easily adjustable from the software front panel generated with LabVIEW Communications. Furthermore, relevant link metrics, including received power spectrum, received constellation, throughput, and block error rates, are also displayed for easy assessment of the link quality. They allow researchers to understand the effects of various parameters on communications performance.
These application frameworks, combined with the ease of development LabVIEW Communications provides and the seamless integration with NI SDR hardware, enable wireless researchers to innovate faster and reduce time to market for their next breakthrough innovations.
The latest version of the LabVIEW Communications LTE Application Framework includes:
The latest version of the LabVIEW Communications 802.11 Application Framework includes:
The latest version of the LabVIEW Communications MIMO Application Framework includes:
Modifying the IP does require a deep understanding of the products. We offer a three-day in-class training course for LabVIEW Communications System Design Suite as well as custom training for the application frameworks. If you need to modify the designs to fit your application, we highly encourage you to take these courses. If you have any questions about the training, please contact your local sales representative.
An additional benefit of a tool that scales across processors, FPGAs, and I/O is the ability to describe the entire system, including the interactions between components within a single tool. Consequently, this enables full system simulations with substantially less effort, as designers don’t need to stitch simulations across tools to estimate and understand how a system might behave.
LabVIEW Communications offers an ability to set up, organize, and manage your system in a feature called SystemDesigner. It primarily provides a graphical representation of your hardware system and enables intuitive configuration, software organization, deployment, and documentation. By consolidating these various functions into the development environment, it becomes a starting point for your development and a hub for hardware configuration.
Figure 1. The left-hand side shows SystemDesigner, which describes how the system is set up and which software is targeted to which hardware.
Driver information, hardware documentation, and the software targeted to run on each piece of hardware are all one click away in SystemDesigner, making it easier to manage your system.
Although LabVIEW Communications works with a variety of design languages and approaches, including C, MATLAB®, VHDL, and dataflow, the graphical dataflow language can span both the processor and FPGA seamlessly. The advanced compiler technology within LabVIEW Communications optimizes and handles the mapping of the G dataflow language to the underlying processing component—whether it’s a processor or an FPGA. This provides designers considerable flexibility in experimenting with design partitioning as they can seamlessly move an algorithm or components of an algorithm between the FPGA and the processor.
To ensure a seamless transition of algorithms designed in G between processor and FPGA hardware, LabVIEW Communications also provides built-in tools for data-driven float-to-fixed point conversion. Furthermore, the performance optimizations on the implementation are specific to the underlying hardware. For example, for a diagram targeted to the processor, LabVIEW Communications can properly parallelize and partition a design to automatically use the full potential of a multicore processor. If a deterministic execution of such code is needed for a specific application, simply change the hardware target to NI Linux Real-Time without rewriting the code. And for a diagram targeted to the FPGA, it can accept various user-specified constraints like throughput and clock rate to properly synthesize a hardware design on the FPGA fabric.
Overall, this ability to quickly partition the design and rapidly iterate on the ideal implementation is possible with only LabVIEW Communications as it offers access to both the FPGA and processor. As such, without the hardware integration available in the tool, such design flexibility would be nearly impossible to realize. The benefit to users is the ability to better characterize a design and to truly understand design trade-offs, which can motivate further refinements. As legions of researchers join the fray to define the next generations of communications standards, tools that enable efficient, rapid innovation on quality SDR systems are essential in the race to deliver the next disruptive solution to market. It’s no surprise then that LabVIEW Communications System Design Suite and NI SDR hardware are already in the arsenals of those leading the marketplace.
Figure 1: 3D printing filament. Source: Maurizio Pesce / CC BY 2.0
Additive manufacturing, or 3D printing, helps companies manufacture parts more quickly than traditional methods, easily adding customization options and helping designers to be more creative without incurring costs. Engineers play critical roles in helping their firms become more efficient and competitive, and additive manufacturing can play a large part in accomplishing those goals.
Automotive and Aerospace Industries
Additive manufacturing can provide considerably shorter lead times than traditional methods such as casting or machining. Products can, therefore, be developed faster and make it into the testing queue faster. In the future, car buyers may well stroll into a showroom, dictate a car’s color, size, design, and niceties, and have the vehicle fabricated then and there. There are prototypes of 3D-printed cars — Petr Chladek’s 4ekolka is an example. Today, however, car enthusiasts have to settle for 3D printing of automotive components, printed using large-format machines and woven-fiber composite printers.
Electronic Circuitry
Those tiny circuit boards inside consumer electronics such as handheld toys and cell phones could save manufacturers time and money if they were made using 3D-printing technology. Almost a year ago, in November 2017, a team of researchers from the University of Nottingham rapidly 3D-printed fully functional electronic circuits using electrically conductive metallic inks and insulating polymeric inks, an approach that could be useful for medical devices in particular. No standard capacitor values are needed when designing such a circuit: the designer specifies the value, and the printer produces a component to match.
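The reason arbitrary values are possible is that the printer controls the component’s geometry directly. For an idealized parallel-plate capacitor (a textbook approximation, not a detail from the Nottingham work), the capacitance is

\[
C = \varepsilon_r \varepsilon_0 \frac{A}{d},
\]

so choosing the printed electrode area A and the thickness d of the insulating polymer layer between the electrodes yields whatever value the design calls for, with the relative permittivity ε_r fixed by the ink.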
Medical Gear
3D printing is currently used to create dental prosthetics, hearing aids and unique scaffolding for joint replacement and reconstructive cosmetic surgeries. Further, researchers at Wake Forest University have created a 3D printer that can produce organs, tissues and bones that could theoretically be implanted into human beings. Instead of putting down layers of molten plastic or metal, Wake Forest’s printer uses hydrogels — water-based solutions containing human cells. Tissues are printed that can accommodate blood vessels capable of receiving the oxygen and nutrients that cells need to survive.
Food Science
While 3D printing can make beautiful food sculptures with far less effort than is possible by hand, there are other, more practical, uses for additive technology that incorporate food. 3D food printers could improve the nutritional value of meals and provide solutions to hunger in areas with few fresh, affordable ingredients. Printers that use hydrocolloids, substances that form gels with water, could be used to replace the base ingredients of familiar dishes with plentiful renewables including algae, duckweed, and grass. And, in the future, food could be printed with customized nutritional content, optimized based on biometric and genomic data.
Clearly, additive manufacturing will impact engineering jobs. Engineers working in biomedicine, food, auto and avionics in addition to civil engineering and industrial design will see increasing uses for 3D printing. Advances in chemical science will lead to more advanced plastics being manufactured by 3D printers. Pharmaceutical companies are developing molecule level printers that can print drugs on demand. No matter what your area of engineering interest, additive manufacturing will likely play a role in its future.
Pig farmers want human diners to bite into the delicious pork they produce, not for swine to bite each other. (Yes, it happens.) Now, using 3D cameras and machine-vision algorithms, scientists are developing a way to automatically detect when a pig might be about to chomp down on another pig.
Pigs have an unfortunate habit of biting one another’s tails. Infections from these bites can render up to 30 percent of a pig farm’s swine unfit for human consumption. Docking, or cutting, pig tails can reduce such biting but does not eliminate it, and the routine use of docking is banned in the European Union. There are a wide range of potential triggers for outbreaks of tail biting—among them genetics, diet, overcrowding, temperature variations, insufficient ventilation and lighting, disease, and even the season—so it’s an unpredictable problem. “Tail biting is a very frustrating challenge,” says John Deen, a veterinarian and epidemiologist at the University of Minnesota. “Controlling it has not always been that effective.”
To predict and potentially prevent tail biting, researchers in Scotland monitored 667 undocked pigs on a farm using both time-of-flight and regular video cameras that recorded continuously for 52 days. The pigs were checked at least twice a day for evidence of biting.
Each time-of-flight camera emitted pulses of infrared light from LEDs 25 times a second, and recorded the amount of time needed to detect reflected pulses. This data allowed scientists to track each pig’s position and posture. Machine-vision algorithms from farm-technology company Innovent Technology, in Aberdeenshire, Scotland, then determined which activities might serve as possible early warning signs of tail biting.
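The depth measurement behind this is the standard time-of-flight relationship (a general property of such cameras, not a detail of Innovent’s software):

\[
d = \frac{c\,\Delta t}{2},
\]

where Δt is the round-trip travel time of the reflected infrared pulse and c is the speed of light. At pen distances of a few meters, Δt is on the order of 10 to 20 nanoseconds, and measuring it 25 times a second produces the depth maps from which each pig’s position and posture are tracked.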
The scientists found that before outbreaks of biting, pigs increasingly held their tails down against their bodies. Moreover, the software could detect when these changes in tail posture occurred with 73.9 percent accuracy. “It looks like good technology, and I’m very interested in how it could be applied on a farm,” says Deen, who did not take part in this project.
If farmers think a biting outbreak is likely to happen in a pigpen, they could deploy distractions such as straw, knotted ropes, or shredded cardboard, which tap into the pigs’ instincts to root and chew.
“Another thing that people try is to apply bad-tasting stuff such as Stockholm Tar to tails,” says Richard D’Eath, an animal behavior scientist at Scotland’s Rural College, in Edinburgh, who worked on this research. An early warning system could help farmers use such remedies only when needed, which would save money.
This research was part of a £160 million push by the United Kingdom to support innovative farming technology through its Agri-Tech Catalyst program. Agriculture and food already help generate more than £108 billion annually and support 3.9 million employees. A recent industry-led review suggested that incorporating digital technologies such as robotics and autonomous systems into food manufacturing could add £58 billion to the U.K. economy over the next 13 years.
A three-year project called TailTech is now furthering the development of this early warning system with up to £676,000 in funding from Innovate UK, a government agency. The aim is to test a prototype system on more than 16,000 pigs at nine farms throughout Europe for roughly 18 months, with each time-of-flight camera capable of monitoring up to 300 pigs, D’Eath says.
TailTech will compare the system’s efficacy for different types of pig farms, including ones with docked and undocked pigs, with and without straw on their floors, and with pigs of varying genetics, diet, and group sizes. The project will analyze what fraction of pigs hold their tails low, and for how long, before scientists are sure a tail-biting outbreak will occur. And it will assess how predictors vary between farms. The researchers also aim to improve the accuracy of their system. “In practice, though, 73.9 percent is good enough for the system to work well,” D’Eath says.
The ultimate aim is to have an early warning system “that reads out continually on a screen and also sends alerts to the farmer’s smartphone,” he adds. “No technical expertise will be needed once the system is installed.” The software will compute trends to give farmers a better idea of their herds’ current level of risk, D’Eath says.
A farmer might not buy technology designed solely to detect tail biting, but D’Eath notes that this system is being developed as an add-on to a camera-based automatic pig-weighing system called Qscan that Innovent already produces to help farms meet contractual weight targets. Deen, the veterinarian and epidemiologist, thinks this strategy could make all the difference. As he says, “If Qscan is already in place, I think farmers can quite easily justify adding this system at little cost.”
This article appears in the October 2018 print issue as “Machine Vision to Curb Pig Pugnacity.”
Usually, news of an automotive-related software issue involves an error like the one behind last week’s GM recall of 1 million SUVs and pickups because of a steering defect in their electric power-steering modules. GM stated that the defect can cause a momentary loss of power steering followed by its sudden return, which can lead to an accident, and already has in about 30 known cases. GM says a software update to the module, available from its dealers, will fix the problem.
But a software remedy can’t solve Subaru’s issue with 293 of its 2019 Ascent SUVs. All 293 of the SUVs that were built in July will be scrapped because they are missing critical spot welds.
According to Subaru’s recall notice [PDF] filed with the U.S. National Highway Transportation Safety Administration, the welding robots at the Subaru Indiana Automotive plant in Lafayette, Ind., were improperly coded, which meant the robots omitted the spot welds required on the Ascents’ B-pillar. Consumer Reports states that the B-pillar holds the second-row door hinges. As a result, the strength of the affected Ascents’ bodies may be reduced, increasing the possibility of passenger injuries in a crash.
Subaru indicated in the recall that “there is no physical remedy available; therefore, any vehicles found with missing welds will be destroyed.” Luckily, only nine Ascents had been sold, and those customers are going to receive new vehicles. The rest were on dealer lots or in transit.
Intriguingly, the automotive manufacturer indicated that the 293 Ascents without the spot welds were assembled between 13 and 21 July, but not all the Ascents assembled during that time are missing the welds. Subaru did not provide any details on why some Ascents had missing welds, and others did not. It found an Ascent with the missing welds during a routine sampling inspection on 21 July, which immediately led to an investigation into the cause. (That must have been a real, “What the heck?” moment.)
Problems with software causing automotive manufacturing defects are rare, but not unheard of. An oft-told tale of software failure occurred in the 1980s, when GM launched a major initiative to heavily automate its car assembly lines, using robotics to compete with Japanese carmakers. However, this effort didn’t always go according to plan, as some of GM’s vehicle painting robots at its showcase luxury car Detroit-Hamtramck assembly plant painted each other rather than the cars.
The problems didn’t end there: Robots used for installing windshields had a bad habit of smashing them instead, some automated spot welders decided to weld car doors shut, and the automatic guided vehicles system used to convey parts throughout the plant never fully worked.
GM decided that it needed to scale back its automation, and gave its strategy a rethink. Apparently, though, the fundamental lessons about the limits of automation are being harshly relearned today.
In a bit of déjà vu, Elon Musk admitted earlier this year that the “excessive automation [used] at Tesla” was a mistake; instead of speeding up the company’s Model 3 production, the automation slowed it down, he said. Like GM before it, Tesla Motors had been supremely confident that extensive automation would give the company a massive competitive advantage against its rivals.
In an April interview with CBS News concerning Tesla’s Model 3 “production hell,” Musk said that, “We had this crazy, complex network of conveyor belts…. And it was not working, so we got rid of that whole thing.”
Musk confessed that Tesla had put too much technology into its Model 3 as well, saying, “We got complacent about some of the things that we felt were our core technology…. We put too much new technology into the Model 3 all at once. This—this should have been staged.”
One might wonder whether the Model 3’s technological complexities played a part in last week’s over-the-air software update that disabled several key vehicle systems including automatic emergency braking. Model 3 owners quickly noticed and complained about the problem. Tesla blamed it on a firmware issue.
An interesting question this incident raises is: If an over-the-air update erodes safety features in a fully autonomous vehicle of the future, will passengers be expected to notice?
Since May 1920, the U.S. federal WWV radio stations have broadcast the official time without fail. For ham radio operators, hearing the friendly “National Institute of Standards and Technology Time!” announcement is a comforting old refrain. For others, it’s a service they’ve never heard of—yet in the background, it’s what keeps the clocks and appliances in their daily lives automatically ticking along on time.
But after 98 years, this constant companion could soon go off the air. The proposed 2019 U.S. presidential budget calls for a 34 percent cut in NIST funding; in response, the institute compiled a budget-use plan that would eliminate the WWV stations.
At first blush it might sound like the natural end to a quaint public service from a bygone era. Do we really need radio-broadcast time signals in an era of Internet-connected devices and GPS?
Many would argue: Yes, we really do. More than 50 million devices in the United States—including wall clocks, wristwatches, and industrial appliances—keep time through the signal from NIST’s WWVB station, operating from a site near Fort Collins, Colo., where it reads the time directly from an atomic clock. These radio-equipped clocks are permanently tuned to WWVB’s low-frequency, 60-kilohertz signal.
“WWVB is the pacemaker for the world around us, even if we don’t realize it,” says Thomas Witherspoon, editor of shortwave radio news site The SWLing Post. “It’s why factory workers and schools don’t need to drag out the stepladder every time we switch between daylight and standard time. Without WWVB, these devices won’t magically update themselves.”
Those household devices and industrial clocks generally don’t have Internet capability, Witherspoon points out, so without WWVB “we’d likely be getting on ladders twice a year to manually have our clocks spring forward and fall back.”
What’s more, the nonradio alternatives just aren’t reliable, says John Lowe, station manager for WWVB and its sister high-frequency stations WWV, also in Fort Collins, and WWVH out of Kauai.
Internet connections aren’t available everywhere. And “GPS does not penetrate into buildings, which is an obvious problem,” Lowe says. “Plus, it’s vulnerable, as it’s prone to jamming as well as spoofing.”
The WWV stations are more than just timekeepers, too. The stations emit reference frequencies that military personnel, mariners, ham radio operators, and anyone else can use to calibrate devices. Their operators also broadcast information like space weather alerts, GPS satellite health reports, and marine storm warnings.
But, at least as written in the NIST budget plan, these airwaves are slated to go silent. So is Lowe worried about his job? “I am not,” he says flatly.
Lowe points out that it’s only the presidential budget proposal that suggests cuts to NIST; the House and Senate proposals both leave the agency’s budget intact. Plus, the WWV stations have survived similar presidential proposals before, he adds.
Still, the stations’ listeners are taking the potential loss seriously. Fans have circulated multiple Whitehouse.gov petitions, while outlets like Witherspoon’s SWLing Post are using their platforms to encourage supporters to contact their local representatives. Congress has until 1 October to finalize the budget for fiscal year 2019, at which point the fate of these stations will become clear.
“As a one-way broadcast, typically it’s very difficult to ascertain our user base,” Lowe notes. “But when events like this come up, we get a lot of feedback. It’s a silver lining: We’ve received a lot of positive support, and it shows us this is still a highly valued service.”
Despite the ubiquity of drones nowadays, it seems to be generally accepted that learning how to control them properly is just too much work. Consumer drones are increasingly being stuffed full of obstacle-avoidance systems, based on the (likely accurate) assumption that most human pilots are to some degree incompetent. It’s not that humans are entirely to blame, because controlling a drone isn’t the most intuitive thing in the world, and to make it easier, roboticists have been coming up with all kinds of creative solutions. There’s body control, face control, and even brain control, all of which offer various combinations of convenience and capability.
The more capability you want in a drone control system, usually the less convenient it is, in that it requires more processing power or infrastructure or brain probes or whatever. Developing a system that’s both easy to use and self-contained is quite a challenge, but roboticists from the University of Pennsylvania, U.S. Army Research Laboratory, and New York University are up to it—with just a pair of lightweight gaze-tracking glasses and a small computing unit, a small drone will fly wherever you look.
While we’ve seen gaze-controlled drones before, what’s new here is that the system is self-contained, and doesn’t rely on external sensors, which have been required to make a control system user-relative instead of drone-relative. For example, when you’re controlling a drone with a traditional remote, that’s drone-relative: You tell the drone to go left, and it goes to its left, irrespective of where you are, meaning that from your perspective it may go right, or forwards, or backwards, depending on its orientation relative to you.
User-relative control takes your position and orientation into account, so that the drone instead moves to your left when it receives a “go left” command. In order for this to work properly, the control system has to have a good idea of both the location and orientation of the drone, and the location and orientation of the controller (you), which is where in the past all of that external localization has been necessary. The trick, then, is being able to localize the drone relative to the user without having to invest in a motion-capture system, or even rely on GPS.
Making this happen depends on some fancy hardware, but not so fancy that you can’t buy it off the shelf. The Tobii Pro Glasses 2 is a lightweight, noninvasive, wearable eye-tracking system that also includes an IMU and an HD camera. The glasses don’t have a ton of processing power onboard, so they’re hooked up to a portable NVIDIA Jetson TX2 CPU and GPU. With the glasses on, the user just has to look at the drone, and the camera on the glasses will detect it, using a deep neural network, and then calculate how far away it is based on its apparent size. Along with head orientation data from the IMU (with some additional help from the camera), this allows the system to estimate where the drone is relative to the user.
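Here’s a minimal Python sketch of the apparent-size idea (my own illustration; the focal length, drone width, camera center, and detector output format are placeholder assumptions, not values from the paper):

```python
import numpy as np

def estimate_drone_distance(bbox_width_px, drone_width_m=0.3, focal_px=900.0):
    """Pinhole-camera range estimate: distance = f * real_width / pixel_width.

    bbox_width_px: width of the drone's bounding box from the detector (pixels)
    drone_width_m: assumed physical width of the drone (meters)
    focal_px:      assumed camera focal length expressed in pixels
    """
    return focal_px * drone_width_m / bbox_width_px

def drone_position_in_camera_frame(bbox_center_px, bbox_width_px,
                                   principal_point=(960, 540), focal_px=900.0):
    """Back-project the detection into a 3D point in the glasses' camera frame."""
    d = estimate_drone_distance(bbox_width_px, focal_px=focal_px)
    u, v = bbox_center_px
    cx, cy = principal_point
    # Ray through the detected pixel, scaled out to the estimated range
    x = (u - cx) / focal_px * d
    y = (v - cy) / focal_px * d
    return np.array([x, y, d])

# Example: a 60-pixel-wide detection slightly right of the image center
print(drone_position_in_camera_frame((1100, 500), 60))
```

In the actual system, the head-orientation estimate from the glasses’ IMU is then used to rotate this camera-frame position into a user-fixed frame, so that commands stay user-relative as the wearer moves around.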
And really, that’s the hard part. Then it’s just a matter of fixating on somewhere else with your gaze, having the glasses translate where your eyes are looking into a vector for the drone, and then sending a command to the drone to fly there. Er, well, there is one other hard part, which is turning where your eyes are looking into a 3D point in space rather than a 2D one as the paper explains:
To compute the 3D navigation waypoint, we use the 2D gaze coordinate provided from the glasses to compute a pointing vector from the glasses, and then randomly select the waypoint depth within a predefined safety zone. Ideally, the 3D navigation waypoint would come directly from the eye tracking glasses, but we found in our experiments that the depth component reported by the glasses was too noisy to use effectively. In the future, we hope to further investigate this issue in order to give the user more control over depth.
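Read literally, the procedure the authors describe can be sketched like this (again my own illustration; the camera intrinsics and the safety-zone depth bounds are assumed for the example):

```python
import numpy as np

def gaze_to_waypoint(gaze_px, focal_px=900.0, principal_point=(960, 540),
                     depth_range=(1.0, 3.0), rng=np.random.default_rng()):
    """Turn a 2D gaze coordinate into a 3D waypoint in the glasses' frame.

    gaze_px:     (u, v) gaze point reported by the eye tracker, in pixels
    depth_range: assumed safety zone (meters) from which the depth is drawn,
                 since the tracker's own depth estimate was too noisy to use
    """
    u, v = gaze_px
    cx, cy = principal_point
    # Unit pointing vector through the gaze pixel
    ray = np.array([(u - cx) / focal_px, (v - cy) / focal_px, 1.0])
    ray /= np.linalg.norm(ray)
    # Randomly pick the waypoint depth inside the predefined safety zone
    depth = rng.uniform(*depth_range)
    return ray * depth

print(gaze_to_waypoint((700, 450)))
```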
It’s somewhat remarkable that the glasses are reporting depth information from pupil-tracking data at all, to be honest, but you can see how this would be super difficult, determining the difference in pupil divergence between you looking at something that’s 20 feet away as opposed to 25 feet away. Those 5 feet could easily be the difference between an intact drone and one that’s in sad little pieces on the ground, especially if the drone is being controlled by an untrained user, which is after all the idea.
The researchers are hoping that eventually, their system will enable people with very little drone experience to safely and effectively fly drones in situations where finding a dedicated drone pilot might not be realistic. It could even be used to allow one person to control a bunch of different drones simultaneously. Adding other interaction modes (visual, vocal, gestures) could add capabilities or perhaps even deal with the depth issue, while a system that works on gaze alone could potentially be ideal for people who have limited mobility.
If nanotechnology has one clear image in the collective pop-culture consciousness, it is that of nanorobots, nanoscale machines capable of performing mechanical functions. When considering the potential of such a technology, the more astute may ask themselves: How would you manage to direct the movements of these nanorobots?
Researchers at the University of Texas at Austin have discovered a physical phenomenon in the way that semiconductor nanoparticles interact with light when under the influence of an electric field that may answer that question.
In research described in the journal Science Advances, the University of Texas scientists discovered that the strong interactions of light, semiconductor nanoparticles, and electric fields lead to the efficient reconfigurable operation of semiconductor nanomotors, or nanodevices.
Using only optical microscopy, the researchers could distinguish between semiconductor silicon and gold nanoparticles by observing their mechanical responses to light. This method is contactless and cheap compared with traditional measurement techniques.
In addition, the researchers believe that this combination light/electric field effect could be used to reconfigure micro- or nanomechanical switches or antennas, or be coupled with micromachines for electronic and biomedical applications.
“I consider the discovered effect a mechanical analogy of the field-effect transistors [FETs], the building blocks of CPUs that have revolutionized society,” said Donglei Fan, associate professor at the University of Texas and coauthor of the research. “A FET switches on and off in response to an externally applied voltage. Our device switches among multiple mechanical rotation modes in response to light intensity, which is instant and can be repeated many times.”
To describe how the effect works, Fan explained that when light hits a semiconductor nanowire, it frees electrons and changes the electric conductivity of the nanowire and its polarization. When the nanowire is placed in an external electric field to drive its mechanical rotation, the driving torque is changed because of the light.
Fan and her colleagues see a number of potential applications for the technology. In optical sensing, for instance, under the right conditions it could become possible to directly correlate mechanical motions with light intensity.
Fan also suggests it could be used in drug delivery. “Back in 2015, we discovered that mechanical rotation of drug carriers can change the molecule release rate,” she said. “Now, when light can change the rotation speed, one can change the molecule release rate.”
Fan acknowledges that to fully explore the applications in optical sensors or communication, it will be necessary to explore both top-down lithography and bottom-up assembling. Biosensors could be obtained by both approaches, according to Fan.
In all cases, Fan sees this technology enabling static devices to be dynamic and reconfigurable with simple control of light exposure, which is a step toward intelligent electronics and biomedical devices.
She added: “I personally believe this work can lead to a focused field. There are many projects we can do.”
Could you perceive the touch of an ant’s antenna on your fingertip? This new tactile sensor can, and its inventors report that it could one day be integrated into prostheses to give wearers a superhuman sense of touch.
The sensor converts pressure from touch to electric signals that, theoretically, could be perceived by the brain. Researchers at the Chinese Academy of Sciences in Ningbo, Zhenhai, described their invention yesterday in the journal Science Robotics.
There have been a lot of touch sensors described in the literature, but this one’s sensitivity is off the charts. It perceives the most subtle touches, including wind, tiny drops of water, and the actions of an ant. In tests of the device, when the ant wasn’t walking, the tactile sensor even detected the touch of the insect’s antenna.
In numbers, that’s a sensitivity of 120 N⁻¹, a detection limit of 10 micronewtons, and a minimum loading of 50 micronewtons, or 1.25 pascals—less than the sensing threshold of human skin. “That’s a very sensitive sensor,” says Nitish Thakor, a biomedical engineer at Johns Hopkins University, who was not involved in the study.
The inventors of the sensor, led by Run-Wei Li, at the Academy’s Ningbo Institute of Materials Technology and Engineering, accomplished the feat by applying the physics of giant magneto-impedance (GMI).
The sensor is composed of a polymer membrane with magnetic particles on its top surface and a magnetic sensor integrated inside, separated by an air gap. When pressure is applied to the surface, the membrane deforms inward, causing the magnetic particles on the top to move toward the inductive magnetic sensor on the inside, which is made of giant magneto-impedance material. As a result, the magnetic flux passing through the inductive sensing element increases, and the impedance of the sensing element decreases. An oscillation circuit composed of an inductor and a capacitor connected in parallel is then used to convert the time-domain signal into a frequency-domain signal.
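The frequency conversion follows from the resonant frequency of that LC tank (a standard relationship; the paper’s specific component values aren’t given here). If the GMI sensing element acts as the tank’s inductor, then

\[
f = \frac{1}{2\pi\sqrt{LC}},
\]

so as pressure pushes the magnetic particles toward the sensing element and its effective inductance changes, the oscillation frequency shifts, and the touch is read out as a frequency change rather than as a tiny analog voltage.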
Run-Wei Li’s group “has created a clever circuit that produces a rate code in response to pressure,” says Thakor. “Our nerves send a message from our skin to the spinal cord to the brain as these spikes. It doesn’t send it as a 16-digit computer code, and it doesn’t send it as an analog signal. It sends it as a rate code. So it’s a nice first step.”
But the device falls short in several ways, says Thakor. For one, “the oscillations are not truly mimicking nerves,” he says. Hacking into the body’s sensory encoding requires a significant amount of recording of actual nerve impulses, and lots of experimentation to understand the sensations created by artificial stimulation. When we touch something and our sensory nerves perceive it, what does that look like in electrical signals? What are the patterns of electrical pulses that communicate with our brains?
The authors did not investigate this, at least in this paper, nor did they integrate the device onto a prosthetic or hook it up to the body’s communication system. “An engineer or sensor designer may like it, but a neuroscientist would be critical,” says Thakor.
Second, while the sensor array is flexible, the sensors are crude in size—about 5 millimeters in diameter, which isn’t practical for a prosthetic hand. Third, the sensors register only the really subtle sensations. Stronger pressure would saturate them, so the sensors don’t capture the full range of human touch, says Thakor.
IEEE Spectrum was not able to reach the authors of the new paper for comment.
Many groups are working on improving the electrical connections between prosthetics and the human body, in an attempt to improve the experience for people with bionic limbs. Earlier this year, Thakor and his colleagues reported a multilayered electronic skin that enabled a prosthesis to perceive and transmit the feeling of pain to its gracious human volunteer. To do this, the scientists first had to determine the stimulation patterns needed to elicit pain in the phantom hand of the volunteer, and then recreate that with pulses generated by a computer model of the nerve signals.
Zhenan Bao’s group at Stanford in 2015 reported a flexible organic transistor circuit that transduces pressure into digital frequency signals. And Dustin Tyler’s team out of Case Western Reserve University, in Cleveland, developed a neural interface that gave two volunteers with upper arm amputations the sensation of touch in phantom hands. The device lasted for over a year.
Run-Wei Li’s group has made an awesome sensor that has the potential to communicate with the human body. Now let’s hope researchers take it to the next step and connect it to a prosthetic that amputees can actually use.
The next big effort to reduce carbon emissions and hold the line on climate change will be enabled by the Internet of Things. Companies can rethink their costs of operations, taking into account the energy used, with a combination of more granular data from cheap sensors and faster, more in-depth analytics from cheap computing.
At Schneider Electric’s factory in Lexington, Ky., workers make electric components, including load centers and switches. The plant is four years into a company-mandated five-year goal to reduce energy consumption by 5 percent each year. The first two years, it achieved that goal. Then, management decided to deploy sensors and richer analytics to take a closer look at the mix of products made in the plant and the order in which those products were manufactured. They realized that by changing the production mix and order, they could save a lot of energy.
How much? After tweaking the production mix, the plant reduced consumption by 12 percent in year three and 10 percent in year four. “The entire group had been focused on the processing side, and now every process decision is dictated by energy savings,” says Andy Bennett, former senior vice president of Schneider Electric’s EcoStruxure platform, which drove the factory innovations. “What has changed in the last five years is the technology and the drive and need to have a sustainable message.”
Bennett says that despite the United States pulling out of the Paris Agreement, many U.S. business leaders are still focused on reducing carbon emissions. Reduced energy consumption improves the bottom line, but it’s something that manufacturers hadn’t focused on because they didn’t have the tools or impetus.
A similar shift in thinking played out in the 2000s in data centers, as companies such as Amazon, Facebook, and Google realized that power was a significant aspect of their costs of doing business. To address this, they prioritized the metric of performance per watt and forced Intel and Advanced Micro Devices, their suppliers, to focus on that metric.
The results were impressive. In 2016, a report by Lawrence Berkeley National Laboratory showed that energy consumption in those companies’ data centers had remained flat, despite the growth in computing power, and had saved them roughly US $60 billion in energy costs each year.
Now it’s the manufacturing world’s turn. Schneider Electric isn’t the only company using sensor data and artificial intelligence to optimize its energy conservation. This year at the Bosch ConnectedWorld IoT conference, in Berlin, the German industrial giant exhibited software that tracks the energy consumption of industrial processes and calculates how much each one requires.
Ikea, another huge European company, also tracks its energy use, and has gone a step further, calculating the energy used in production to determine whether it should make a product at all. The company, famous for its conservation efforts, weighs the energy consumed in manufacturing against the expected lifetime of a product. If something requires a lot of energy or can’t be recycled effectively, it doesn’t get made, according to Lena Pripp-Kovac, Ikea’s sustainability manager.
Ikea may have an entire business unit dedicated to sustainability, but that’s not feasible for many manufacturing companies. Fortunately for them, the Internet of Things will allow them to re-create similar analyses with much less fuss. And that will be good for all of us.
This article appears in the October 2018 print issue as “The Frugal Factory.”
On 20 September 2017, Marcos Santini was home with his infant son in the city of Caguas, Puerto Rico. His wife, Wilmady Pagan, was still at work. It was no ordinary day. For close to a week, the Santinis, like everyone else on the island, had nervously watched the weather reports as a tropical storm crossed the Caribbean on a path headed straight for Puerto Rico.
Even before Hurricane Maria made landfall at 2 a.m. on the 20th, Marcos’s neighborhood was being buffeted by winds of up to 280 kilometers per hour and drenched by torrential rain. Marcos, his son, and the family dogs took refuge on the second floor of their house.
By the time the skies cleared nearly a day later, Marcos faced a devastating sight. Windows in his house had been blown in, the first floor was flooded, and fallen branches and trees lay across his yard, his neighbors’ yards, and the street. The main road leading to and from the community was impassable, and his wife couldn’t make it home for several days.
Marcos quickly got to work clearing up the storm debris, making repairs, collecting rainwater because the taps weren’t working, and figuring out a way to do without electricity or phone service. He also helped care for his asthmatic son and some elderly relatives.
This wasn’t how Marcos had envisioned spending his fall. He was in the middle of completing a technical certificate program in lasers and photonics at the Universidad Metropolitana’s Puerto Rico Photonics Institute, where the two of us are professors. Marcos had enrolled at PRPI to, in his words, “become a laser jock.” He’s passionate about the technology and someday hoped to start a business that involved lasers. But immediately after the storm, photonics was the furthest thing from his mind. He wasn’t sure if he would ever return to school.
PRPI was created at UMET in 2011 to conduct research and education and to nurture Puerto Rico’s nascent photonics industry. One of us (Friedman) had been a research scientist at the Arecibo Observatory for nearly two decades before joining the UMET faculty to launch PRPI. Díaz had moved to Puerto Rico from Penn State University in 2010, initially working at AT&T and then joining UMET in 2014.
PRPI is the only photonics institute in the Caribbean. The program that Marcos enrolled in trains students to become technicians. In addition to taking courses and doing lab work, they are placed in paid internships with companies in Puerto Rico. We believe innovative initiatives like PRPI could and should play a key role in energizing Puerto Rico’s torpid economy.
Hurricane Maria was certainly a setback. The institute’s classes are held on the main campus in the city of Cupey, southeast of San Juan. Thankfully, this area avoided significant storm damage compared with the rest of the island, although it had no power, water, or communications for weeks.
The PRPI laboratories, which are located in the Barceloneta Science Park, west of San Juan, didn’t have power or water for six months. As a workaround, we used shared lab space on the main campus. Each week, the two of us would drive to Barceloneta, enter the unlit, un-air-conditioned building, and, using flashlights, collect whatever equipment and supplies we needed for that week’s lab exercises, returning the equipment we’d already used.
Beyond those immediate hardships, enrollment in our program and in UMET as a whole has suffered. The university now faces deep and difficult budget decisions precipitated by the storm, including laying off faculty and staff and closing academic departments. PRPI’s staff will soon drop from six to two, just as a new crop of students seeking associate degrees start their studies and our sophomores begin their second year.
The good news is that PRPI remains open and committed to rebuilding our community. Marcos Santini managed to contact UMET on 15 October, just three and a half weeks after the hurricane. A week later he started classes again, through one-on-one tutorials with Díaz. Marcos also began an internship at Critical Hub Networks, where he helped re-establish fiber-optic networks in San Juan. The company was so pleased with Marcos’s dedication that it hired him full time at the end of his internship.
By April, our labs in Barceloneta had reopened. Marcos and another student in the certificate program had to work with equipment that had weathered six months in the heat and humidity, but they succeeded in learning how to use a laser engraver, optical dimensional metrology systems, and optical thin-film coating apparatus.
Even as he was finishing his certificate and working full time, Marcos also continued to care for his son, who had to be hospitalized for a month because of health problems triggered by the hurricane. Despite his many responsibilities, Marcos finished his certificate in May and is continuing with his associate degree this fall. He’s an inspiration to anyone who must overcome adversity in pursuit of their goals.
Jonathan S. Friedman is director of the Puerto Rico Photonics Institute at the Universidad Metropolitana. Andrés Díaz is PRPI’s academic coordinator.
When the former president of Google China talks about artificial intelligence and its potential to cause global upheaval, people listen. His hope is that enough people will listen to avert catastrophic disruption on three different scales: to the global balance of power, to national economies, and to human beings’ delicate souls.
Kai-Fu Lee has been fascinated by AI since he was an eager computer science student applying to Carnegie Mellon University’s Ph.D. program; his admission essay extolled the promise of AI, which he called “the quantification of the human thinking process.” His studies led him to executive positions at Apple, Microsoft, and Google China, before his 2009 founding of Sinovation Ventures, a venture-capital firm focusing on high-tech companies in China.
His new book, AI Superpowers: China, Silicon Valley, and the New World Order (Houghton Mifflin Harcourt), is something of a bait and switch. The first half explores the diverging AI capabilities of China and the United States and frames the discussion as a battle for global dominance. Then, he boldly declares that we shouldn’t waste time worrying about who will win and says the “real AI crisis” will come from automation that wipes out whole job sectors, reshaping economies and societies in both nations.
“Lurking beneath this social and economic turmoil will be a psychological struggle,” he writes. “As more and more people see themselves displaced by machines, they will be forced to answer a far deeper question: In an age of intelligent machines, what does it mean to be human?”
In a wide-ranging Q&A with IEEE Spectrum, Lee not only explored this question further, he also gave his answer.
IEEE Spectrum: Why do you believe that China will soon match or even overtake the United States in developing and deploying AI?
Kai-Fu Lee: The first and foremost reason is that we’ve transitioned out of an era of discovery—when the person who makes the discovery has a huge edge—and into an era of implementation. The algorithms for AI are pretty well known to many practitioners. What matters now is speed, execution, capital, and access to a large amount of data. In each of these areas, China has an edge.
That’s why I began the book by talking about China’s entrepreneurism. It’s not like Silicon Valley, which is built on iPhone breakthroughs and SpaceX innovations, it’s built on incredibly hard work. Chinese entrepreneurs find areas where there’s enough data and a commercially viable application of AI, and then they work really hard to make the application work. It’s often very hard, dirty, ugly work. The data isn’t handed to you on a silver platter.
Spectrum: You say that Chinese tech giants like Tencent have a clear advantage in terms of access to data that’s needed to train AI. Do they really have more data than companies like Google?
Lee: There are a few ways to look at the data advantage. The first is how many users you have. Google probably has more users than Tencent, because it’s international. The second question is: How homogenous is your data set? Google’s data from Estonia may not help its work in India. It may be better to have rich data from one set of people who have the same language, culture, preferences, usage patterns, payment methods, and so on.
The third way to measure is how much data you have about each person. Tencent has a catch-all app, WeChat, that does basically everything. The average Chinese Internet user spends half of his or her time online in WeChat. When you open WeChat, you have access to everything U.S. users get from Facebook, Twitter, iMessage, Uber, Expedia, Evite, Instagram, Skype, PayPal, GrubHub, LimeBike, WebMD, Fandango, YouTube, Amazon, and eBay.
Spectrum: You describe China’s startup ecosystem as a brutal “coliseum” where companies don’t win because they’re the most innovative, but rather because they’re the best at copying, using dirty tricks, and working insane schedules.
Lee: There is creativity, but it’s just one tool. Another is copying. Entrepreneurs do whatever it takes to win, to build value for the user, and to make money. If you look at WeChat, you can’t point to one moment when it shocked the world like an iPhone. WeChat today is an amazing innovation, but it didn’t come about because someone at Tencent dreamed it up and built it and shocked the world. They kept layering on features that users wanted, they iterated, they threw away the features that didn’t work, and at the end they had a product that was the most innovative social network. It’s so good that Facebook is now copying them.
Spectrum: You write that the big AI question isn’t whether China or the United States will dominate. Instead it’s how we’ll deal with the “real AI crisis” of job losses, wealth inequality, and people’s sense of self-worth.
Lee: AI will take many single-task, single-domain jobs away. You can argue that humans have abilities that AI does not: We can conceptualize, strategize, create. Whereas today’s AI is just a really smart pattern recognizer that can take in data, optimize, and beat humans at a given task. But how many jobs in the world are simple repetitions of tasks that can be optimized? How many jobs require no creativity, strategizing, conceptualization? Most jobs are repetitive: truck-driving, telemarketing, dishwashing, fruit picking, assembly-line work, and so on. I’m afraid that about 50 percent of jobs in the world are in danger.
Whether these jobs will disappear in 15 years or 20 or 30, that’s debatable. But it’s inevitable. Not only can AI do a better job, it can do the job for almost marginal cost. Once you get the system up and running you just pay for the server, electricity, bandwidth. To be competitive, companies will be forced to automate. And this shift will happen a lot faster than has ever happened before in the history of humanity.
Spectrum: Why do you think “techno-utopians” have it wrong when they say that AI will ultimately create entirely new categories of jobs, just like the industrial revolution?
Lee: People say that in the industrial revolution, more jobs were created than destroyed. They say it was the same with electricity, and that we shouldn’t worry, because the same thing will happen this time with AI. I would agree with that if we had enough time. Those earlier technological revolutions took a century or longer. Electricity has been around for over 100 years, and we still don’t have electric cars, we’re still working on the grid. That gave people time to grow, and develop, and invent new jobs. But we have basically one generation with AI, and that’s a lot less time.
Spectrum: You argue that even if governments figure out a way to distribute money to all these jobless people, it will still be a crisis.
Lee: Most people don’t think of their job just as a source of income. It brings meaning to their life, it’s their contribution to the world. That’s how we decided to structure our capitalistic society: There’s the idea that even by working routine jobs, they can make money and make better lives for their families. If we pull the rug out from under them and say, you have no job, but here’s some money from the government, I think that would lead to bad outcomes. Some would be happy and retire early. Some will learn a new skill and get a new job, but unfortunately many will learn the wrong job, and get displaced again. A large number of people will be depressed. They will feel that life has no meaning, and this can result in suicide, substance abuse, and so on.
Spectrum: If this kind of economic and societal upheaval is an inevitable consequence of AI, is there any chance that we’ll decide to turn away from the technology, and decide not to use it?
Lee: Individual governments can make certain decisions to slow down the deployment of AI. But for humanity as a whole, it’s not possible. We’ve opened Pandora’s box. We did, as humans, control the proliferation of nuclear weapons, but that technology was secret and required a huge amount of capital investment. In AI, the algorithms are well known to many people, and it’s not possible to forbid people to use them. College students are using them to start companies.
Take autonomous trucks as an example. While China is building cities and highways to facilitate autonomous trucks, the American trucking union is appealing to President Trump to forbid testing on highways. If the U.S. is currently ahead on autonomous trucks, but chooses to slow down the development for fear of taking away jobs from truck-drivers, the only outcome is that China will catch up. Chinese companies will test the trucks, get the data, that data will make the AI better, at some point the technology will be so good that China will export it to the rest of the world. At that point, the U.S. will still have to give in to automation.
Spectrum: You call this a grim picture, but then go on to say that there’s hope, and that the potential for human flourishing has never been greater. You had your own grim experience that led to your own flourishing. Can you talk about your cancer experience?
Lee: I had been a workaholic for my whole life, and I always put work as my top priority. It was only when I faced cancer and possible death that I realized that no amount of money, success, or fame could substitute for the love I have from other people. And I felt great regret for not giving back what they gave me. That was a wake-up call.
After I became better and my cancer was in remission, I changed my life. It’s not that I don’t work hard anymore, I still do. But I prioritize differently. I prioritize family, I found a much better balance. I realized that the optimization that I used to do at work brought me money, success, and fame, but those are not the things that really matter to me—although I once thought they were.
Spectrum: So that experience led to your idea for what you call a blueprint for coexistence with AI, in which the use of AI gives people more time to love each other. You write that we must “forge a synergy between artificial intelligence and the human heart.” Can you give me an example of how this synergy might manifest in the job market?
Lee: One of my favorite examples is in medicine. Imagine a future clinic, in which the room is enhanced with all sorts of sensors that take readings of your body and give the human doctor lots of information. The doctor will help tease out information like family history and specific symptoms.
AI can make a great diagnosis and suggest the treatment, the prescription, and so on. In early days, AI will give statistics and the doctor will make the final choice. But in time, it will be very rare that the doctor overrides the system. So the AI will do the diagnosis, then the doctor will deliver the message in a way that feels caring and warm. The doctor will also let the patient tell his or her story. In many countries, each person gets only five minutes of doctor’s time—but while the doctor may only need five minutes, the patient needs much more to feel heard, to ask questions, and to be reassured.
If each doctor spends more time with each patient, there will need to be more doctors. Maybe they don’t need to be full MDs with 10 years of education and internship, since they no longer need to memorize all the symptoms and treatments. Instead they could get four years of education to become caring, compassionate caregivers, similar to nurse practitioners. Then the cost of health care will come way down, people will get better care, and the number of caregivers will go way up.
Cryptographic protection of sensitive information is arguably facing its most severe challenge to date thanks to quantum computers. To counter this threat, researchers around the globe are investigating new ways to protect secret keys used to send and unlock encrypted data. One advanced method close to commercialization is quantum key distribution (QKD).
QKD employs a feature of quantum mechanics known as the uncertainty principle to ensure transmitted key data cannot be interfered with by an outside party without irreversibly altering the data. Any interference will leave its mark and be detected by the sender and receiver.
Toshiba is a leader in high-speed QKD and has been conducting field trials in Japan and the United Kingdom for several years. Recently, Toshiba and Tohoku Medical Megabank Organization (ToMMo) at Tohoku University announced that they have achieved, for the first time, an average key distribution speed greater than 10 megabits per second over a one-month period. This is roughly five times as fast as the previous fastest QKD speed of 1.9 Mbps established by Toshiba Research Europe in 2016.
In this scheme, a QKD transmitter modulates a photon’s phase to randomly represent a zero or one. Modulated photons are transmitted to the QKD receiver. Based on the received photons, secure keys are generated at both ends. The keys are then fed into a one-time pad algorithm to encrypt and decrypt all other transmitted data. This combination of one-time pad and QKD ensures the transmitted data is fundamentally safe and secure from any known method of attack.
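For readers curious about just how simple that final encryption step is, here is a minimal sketch in Python of a one-time pad: the message bytes are XORed with an equal number of key bytes, and the very same operation decrypts. The key below is a locally generated stand-in; in Toshiba’s system it would come from the QKD hardware, and none of the names here reflect Toshiba’s actual software.

# Minimal one-time pad sketch. The QKD layer (photon transmission, sifting,
# error correction, privacy amplification) is assumed to have already
# produced `key`; the names here are illustrative only.
import secrets

def one_time_pad(data: bytes, key: bytes) -> bytes:
    """XOR each data byte with a key byte; encryption and decryption are identical."""
    if len(key) < len(data):
        raise ValueError("one-time pad requires at least as much key as data")
    return bytes(d ^ k for d, k in zip(data, key))

# Stand-in for a QKD-generated key; at 10 megabits per second, the link
# delivers roughly 1.25 megabytes of fresh key every second.
message = b"genome record 42"
key = secrets.token_bytes(len(message))   # placeholder for QKD output

ciphertext = one_time_pad(message, key)
assert one_time_pad(ciphertext, key) == message  # applying the pad again decrypts

Because every byte of data consumes a byte of key that is never reused, the practical throughput of the encrypted link is capped by the key distribution rate, which is why the jump to 10 Mb/s matters.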
The speed gains are a result of Toshiba and Toshiba Research Europe using high-speed photon detectors and control electronics to register the signals, as well as an improved method for processing the signals into secure key data. Error correction and privacy amplification, until now a bottleneck in the system, have also been enhanced, which has greatly improved the system’s postprocessing speed.
“The trial was conducted over 7-kilometer standard telecommunications single-mode fiber-optic lines connecting the two sites,” says Yoshimichi Tanizawa, senior research scientist at Toshiba’s Corporate Research & Development Center’s Network Systems Laboratory in Kawasaki, near Tokyo. “As the trial was conducted in a practical environment, it is an important step toward high-speed QKD commercialization.”
Toshiba says it has previously conducted successful field trials in the Tokyo area with single-span fiber-optic lines as long as 45 km, while lab tests in the U.K. have reached a distance of 240 km.
Moreover, Toshiba announced in May that it has devised and is testing a new protocol called Twin-Field QKD that will extend the distribution distance to over 500 km of optical fiber. Whereas single photons are sent from one end of the fiber to the other in conventional QKD, with the Twin-Field protocol, photons are sent from both ends to a central location for detection. This effectively doubles the transmission distance.
Commenting on the trials, Alan Woodward, a computer scientist at the University of Surrey, in England, says, “Toshiba’s announcement is notable not so much for the speed achieved, as the fact that it appears to have been done over an extended period over already-installed fibers.”
He added, however, that QKD’s widespread take-up will likely “depend not just on using existing installed fibers, but when you can multiplex QKD on fibers in some way with the data it’s there to protect.”
In the Tohoku trial, Toshiba and ToMMo used separate fiber lines for the content and the key.
In addition to running these tests, Toshiba and ToMMo operated a wireless sensor network to continuously monitor the installed fiber optic lines using multisensor devices incorporating accelerometers and temperature sensors.
The aim was to study how the fiber’s characteristics change with shifts in the weather and nearby vibrations, and how such changes impact the performance of the high-speed QKD. Such an understanding is crucial when the technology is applied to existing communication installations, especially where the fiber is aerial and exposed to the elements.
“The monitoring has confirmed the correlation between the stability of the high-speed QKD system and disturbances to the installed fiber,” says Tanizawa. “For example, we found wind-induced vibrations in the fiber affected stability. We are now working on improving the stability of the QKD system.”
One challenge that must still be overcome is standardizing features of the system. “For instance,” says Tanizawa, “we need to standardize the interface to deliver secure keys to any application, including those in health care, finance, and telecommunications.”
And while he acknowledges more field testing is required to fine-tune the system, he says, “We assume it will be ready for commercialization in 2020.”
Commenting on the growing competition to introduce QKD, Woodward said, “I’m not sure it’s about competing QKD systems but more about whether QKD will be seen as enough of a security advantage over postquantum crypto schemes to warrant the expense of the infrastructure.”
Toshiba is not the only company developing QKD. Telefonica, the Spanish multinational telecommunications provider, working with China’s Huawei and Spain’s Universidad Politecnica de Madrid (UPM), announced this May that they had performed QKD field trials using commercial optical networks.
The Telefonica approach employs software-defined networking technology that enables the network to be centrally and intelligently controlled. Vicente Martin Ayuso, head of the Center for Computational Simulation at UPM, said at the announcement, “Now we have, for the first time, the capability to deploy quantum communications in an incremental way, avoiding large upfront costs and using the same infrastructure.”
And as for QKD’s reputation for being uncrackable, Woodward added, “Nothing is unbreakable. Security is always about the weakest link. Many cryptographic schemes have been considered strong, but then some weakness in the implementation allows attackers to circumvent the security. In constantly talking about QKD as absolutely secure, a false sense of security is being established.”
During October, IEEE members around the world celebrate IEEE Day, commemorating the anniversary of the first technical meeting of the American Institute of Electrical Engineers, the society that eventually merged with the Institute of Radio Engineers in 1963 to become IEEE.
IEEE is renowned for its conferences, publications, and standards. But IEEE’s members—each and every one of its members—are the key, the secret ingredient, to its continued success as a world-class global technology association.
Why do people join IEEE? As the editor of IEEE Spectrum, I’ve had the pleasure of talking to many members about what they do and why they choose to stay close to IEEE. Sometimes these member stories come from unexpected sources.
I once went to a new dentist and found, to my surprise, copies of Spectrum in his waiting room. Why? Before he turned to oral surgery, he’d taken an undergraduate EE degree at what was then the Polytechnic Institute of Brooklyn, where he became a student member of IEEE. A big fan of Spectrum, he’d kept his membership going long after he became an oral surgeon. He liked keeping up with new technical developments, and I think he also had some lingering “what ifs”—what if he’d pursued his interest in chips and computer programming instead of implant technology and imaging?
Other professionals become and remain members for more directly practical reasons—to network with their colleagues or meet new ones, to stay current with rapidly changing developments in technology, to publish in IEEE’s many important journals and transactions, to develop “soft skills” like people management and résumé writing.
For me, the networking component is very important. In an age when you can go online and idly scroll through your phone to research the most ridiculously complicated subjects, still nothing beats a conversation with a fellow carbon life-form to learn something new—while making a new friend or two along the way. Vint Cerf—one of the “fathers” of the Internet, Google evangelist, and IEEE Fellow—says, “It’s the interaction among people, the side conversations, and the chatting in front of a whiteboard that makes IEEE so valuable.”
But, as is the case with most things in life, you get back what you put into whatever group or community you belong to—your school, your company, your family and friends, your IEEE chapter, section, or region.
Are you an active IEEE member? Do you attend meetings, volunteer? If so, we’d like to hear from you. Why did you become a member? Do you come from a long line of members? How has being a member helped you? Be sure to include your name and member grade when you send me your story. Or post your comments online.
We are 374,778 members strong, in 160 countries, with 3,005 student branches and over 2,000 chapters uniting local members. Our members are engineers, scientists, and allied professionals whose technical interests are rooted in electrical and computer sciences, engineering, and related disciplines. Happy IEEE Day!
To date, radio astronomers have cataloged fewer than 300 fast radio bursts, mysterious broadband radio signals that originate from well beyond the Milky Way. Almost a third of them—72, to be precise—were not detected by astronomers at all but instead were recently discovered by an artificial intelligence (AI) program trained to spot their telltale signals, even hidden underneath noisy background data.
The very first recorded fast radio burst, or FRB, was spotted by radio astronomers in 2007, nestled in data from 2001. Today, algorithms spot FRBs by sifting through massive amounts of data as it comes in. However, today’s best algorithms still can’t detect every FRB that reaches Earth.
That’s why the AI developed by Breakthrough Listen—a SETI project headed by the University of California, Berkeley—will be a big help in future searches; it has already found dozens of new bursts in its trial run. “This new AI will allow us to pick up signals not picked up by traditional algorithms,” says Gerry Zhang, a graduate student at the Berkeley SETI Research Center.
There are a few theories about what FRBs might be. The prevailing theory is that they’re created by rapidly rotating neutron stars. In other theories, they emanate from supermassive black holes. Even more out-there theories describe how they’re produced when neutron stars collide with stars composed of hypothetical dark matter particles called axions. The bursts are probably not sent by aliens, but that theory has its supporters, too.
What we do know is that FRBs come from deep space and each burst lasts for only a few milliseconds. Traditionally, algorithms tease them out of the data by identifying the quadratic dispersion sweep associated with FRBs, in which lower radio frequencies arrive slightly later than higher ones. But these signals are coming from far-flung galaxies. “Because these pulses travel so far, there are plenty of complications en route,” says Zhang. Pulses can be distorted and warped along the way. And even when one reaches Earth, our own noisy planet can obfuscate a pulse.
That’s why it makes sense to train an AI—specifically, a convolutional neural network—to poke through the data and find the ones that traditional algorithms missed. “In radio astronomy,” says Zhang, “at least nowadays, it’s characterized by big data.” Case in point: The 72 FRBs identified by the Berkeley team’s AI were found in 8 terabytes of data gathered by the Green Bank Telescope in West Virginia.
To even give the AI enough information to learn how to spot those signals in the first place, Zhang says the team generated about 100,000 fake FRB pulses. The simple quadratic structure of FRBs makes it fairly easy to construct fake pulses for training, according to Zhang. Then, they disguised these signals among the Green Bank Telescope data.
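To give a rough flavor of what such a fake pulse might look like—this is not the Breakthrough Listen code, and every number below is invented except the dispersion constant—a simulated burst can be injected into a noisy time-frequency array by delaying its arrival at each frequency according to the standard cold-plasma dispersion law, which grows with the inverse square of the observing frequency.

# Illustrative sketch: inject a fake dispersed pulse into a noisy dynamic
# spectrum. The sweep follows the standard dispersion delay,
# t_delay ≈ 4.15 ms * DM * f_GHz**-2, i.e. quadratic in 1/frequency.
import numpy as np

def inject_fake_frb(spectrum, freqs_ghz, t_res_s, dm, t0_s, amplitude=5.0):
    """Add a dispersed pulse to a (n_freq, n_time) array of noise."""
    out = spectrum.copy()
    for i, f in enumerate(freqs_ghz):
        delay_s = 4.15e-3 * dm * f**-2          # lower frequencies arrive later
        t_idx = int(round((t0_s + delay_s) / t_res_s))
        if 0 <= t_idx < out.shape[1]:
            out[i, t_idx] += amplitude
    return out

n_freq, n_time = 256, 1024
rng = np.random.default_rng(0)
noise = rng.normal(size=(n_freq, n_time))
freqs = np.linspace(8.0, 4.0, n_freq)            # GHz, a C-band-like span
# DM of 560 pc/cm^3 is roughly the dispersion measure reported for FRB 121102
fake = inject_fake_frb(noise, freqs, t_res_s=3e-4, dm=560.0, t0_s=0.05)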
As the team explains in their paper, accepted by The Astrophysical Journal with a preprint available on arXiv, it took 20 hours to train the AI with those fake pulses using a Nvidia Titan Xp GPU. By the end, the AI could detect 88 percent of the fake test signals. Furthermore, 98 percent of the identifications that the AI made were actually planted signals, as opposed to the machine mistakenly identifying background noise as an FRB pulse.
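Purely as a sketch of the general approach—this is not the team’s actual network architecture—a minimal “pulse versus no pulse” convolutional classifier for such dynamic spectra could look something like this in Keras, with recall and precision tracked the way the figures above are reported.

# Hypothetical minimal CNN classifier for dynamic-spectrum images;
# layer sizes and training setup are placeholders, not the published model.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(256, 1024, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(4),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(4),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability a pulse is present
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Recall(), tf.keras.metrics.Precision()])

# x: dynamic spectra (half with injected pulses), y: 1 = pulse present.
x = np.random.normal(size=(8, 256, 1024, 1)).astype("float32")
y = np.array([0, 1] * 4, dtype="float32")
model.fit(x, y, epochs=1, batch_size=2, verbose=0)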
And at the end of it all, the AI identified 72 new pulses, while a traditional algorithm that had previously combed the data had only found 21. Interestingly enough, all 93 pulses came from FRB 121102, an FRB source somewhere in a dwarf galaxy located 3 billion light-years away and a space oddity among FRB sources because it repeats—nearly all other FRB pulses are one-off events. It’s unclear why FRB 121102 is a repeating source of pulses.
AI’s applications are growing clearer in the world of radio astronomy as the field is flooded with more and more data. Still, it’s not simply a matter of plugging this AI—which, again, was trained on data solely from the Green Bank Telescope—into data from a new telescope to hunt for more FRB pulses. Zhang says the Breakthrough Listen team is just starting to look into the challenge of training a machine on one telescope and adapting it to another telescope. Apparently, each telescope has its own quirks.
Breakthrough Listen is also hunting for signals of extraterrestrial intelligence, which, if those signals are out there, could be just as rare and fleeting as FRB pulses. Using AI to find ET will require training machines to self-supervise their own learning and detect likely anomalies worthy of further scrutiny. “Unlike fast radio bursts,” says Zhang, “we don’t know exactly what we’re looking for. We’re looking for something strange in the signal.”
This article was updated on 19 September 2018 because it originally stated that most FRBs are not spotted in real time. Today, the majority of FRBs are noticed by algorithms checking data as it is received.
Amidst what could be California’s worst wildfire season on record, San Diego Gas & Electric is counting on technology to reduce dangerous sparking from its power lines. This month, the utility completed the initial rollout of a home-grown automated control technology that taps ultrafast synchrophasor sensors to detect and turn off broken power lines before they hit the ground.
Projects such as this mark a turning point for grid control. Synchrophasor sensors send out time-stamped measurements of power and its phase—the angular position of the alternating current and voltage waves—up to 60 times per second. That is at least 120 times as fast as most utilities’ industrial control systems. And the GPS-synchronized time stamps allow data assembled from multiple sensors to create a precise wide-area view of power grids.
The grid’s human operators have progressively attained a wider view since the synchrophasor device’s invention 30 years ago. But only recently have they begun to exploit the speed of these phasor measurement units (PMUs) for real-time grid control.
San Diego’s line-break-protection system works by spotting quick voltage changes. PMUs arrayed along a circuit report continuously, via a high-speed wireless radio network, to a controller in a substation. If the controller spots a sudden voltage spread between adjacent sensors, it orders the closest relays to isolate and de-energize the suspect segment. Generally, it’s all over in less than half a second.
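To make that comparison concrete, here is a toy version of the logic in Python. The threshold, the sensor naming, and the relay interface are hypothetical; SDG&E’s production controller is considerably more sophisticated.

# Illustrative sketch of falling-conductor detection from synchronized PMU data.
from dataclasses import dataclass

@dataclass
class PmuSample:
    sensor_id: str          # position along the circuit, assumed to sort source -> end
    timestamp: float        # GPS-synchronized, up to 60 samples per second
    voltage_kv: float       # measured voltage magnitude

def find_break(samples: list[PmuSample], max_spread_kv: float = 1.0):
    """Return the pair of adjacent sensors bracketing a suspected broken conductor."""
    ordered = sorted(samples, key=lambda s: s.sensor_id)
    for a, b in zip(ordered, ordered[1:]):
        if abs(a.voltage_kv - b.voltage_kv) > max_spread_kv:
            return a.sensor_id, b.sensor_id
    return None

def on_new_frame(samples: list[PmuSample], open_relays) -> None:
    segment = find_break(samples)
    if segment is not None:
        open_relays(*segment)   # de-energize the segment, ideally in under half a second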
San Diego Gas & Electric and its parent company, Sempra Energy, started looking at synchrophasor sensors in 2010 and quickly identified dozens of potential uses. A broken-line-detection and control system became the utility’s flagship project after engineer William O’Brien calculated that it could spot broken lines two to three times as fast as gravity could pull them down, allowing the controller to stop the flow of electricity before a line touched the ground, and thus greatly reduce the risk of fire. (O’Brien developed and patented the concept with Eric Udren, an executive adviser at Quanta Technology, a consultancy based in Raleigh, N.C.)
This month, the system was installed in what the utility expects will be its final form on six circuits emanating from three substations in the fire-prone territory east of San Diego. The utility has 18 more substation build-outs planned and expects to ultimately deploy the system across its entire grid.
This system for detecting and disarming broken lines marks the first deployment of PMU-based automation on a distribution system. But a few utilities elsewhere have already integrated synchrophasor-based controls into their high-voltage transmission grids. One of the first installations, initially completed in Iceland in 2014 and substantially upgraded last year, tunes the island nation’s 50-hertz AC frequency.
Iceland has a relatively small grid whose supply and demand can easily be thrown out of balance when power plants, transmission lines, or big factories unexpectedly go off line. For years, the resulting AC frequency fluctuations regularly caused the grid’s eastern and western zones to split into electrical islands, which often led to power outages.
A wide-area PMU network and added controls, provided by GE’s grid solutions business, enabled Iceland’s grid operator, Landsnet, to rapidly locate power imbalances and automatically fix them by tweaking demand from aluminum smelters and other big consumers. The June 2017 updates have cut the magnitude of Iceland’s AC frequency deviations roughly in half, according to GE senior power systems engineer Sean Norris. “Events that we previously would have expected to cause splits in the system have occurred, and the system has remained intact,” says Norris.
Emerging frequency challenges for Great Britain’s much-larger grid have prompted a three-year research effort directed by London-based National Grid. England and Scotland’s many fossil-fueled power plants and the inertia in their heavy rotating generators currently hold the United Kingdom’s AC frequency steady. But that frequency-stabilizing inertia is disappearing as coal and gas plants shutter.
Simulations conducted earlier this year at the University of Strathclyde, in Glasgow, showed that synchrophasor-driven controls, running on an expanded version of GE’s technology, could keep the U.K. grid stable with fewer inertia-rich generators. By early next year, the research team hopes to begin testing its control platforms at National Grid substations.
Ultimately such real-time controls will take over grid operation, according to Patrick Lee, president of control developer PXiSE Energy Solutions (another Sempra Energy subsidiary). As renewable generation grows, industrial control systems aided by human operators watching PMU readings will no longer suffice. According to Lee, “As the system gets more renewable integration and becomes more dynamic, you have less time to respond. If you don’t have this high-speed synchrophasor-based technology, you really will have no chance.”
This article appears in the October 2018 print issue as “Utilities Roll Out Real-Time Grid Controls.”
In early 2010, Harvard economists Carmen Reinhart and Kenneth Rogoff published an analysis of economic data from many countries and concluded that when debt levels exceed 90 percent of gross domestic product, a nation’s economic growth is threatened. With debt that high, expect growth to become negative, they argued.
This analysis was done shortly after the 2008 recession, so it had enormous relevance to policymakers, many of whom were promoting high levels of debt spending in the interest of stimulating their nations’ economies. At the same time, conservative politicians, such as Olli Rehn, then an EU commissioner, and U.S. congressman Paul Ryan, used Reinhart and Rogoff’s findings to argue for fiscal austerity.
Three years later, Thomas Herndon, a graduate student at the University of Massachusetts, discovered an error in the Excel spreadsheet that Reinhart and Rogoff had used to make their calculations. The significance of the blunder was enormous: When the analysis was done properly, Herndon showed, debt levels in excess of 90 percent were associated with average growth of positive 2.2 percent, not the negative 0.1 percent that Reinhart and Rogoff had found.
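The kind of check Herndon ran is easy to sketch today in a few lines of Python with pandas: group each country-year observation into a debt-to-GDP bucket and average the growth rates. The numbers below are made up for illustration; the real analysis used Reinhart and Rogoff’s own dataset.

# Toy replication-style analysis; the data frame is invented.
import pandas as pd

df = pd.DataFrame({
    "country": ["A", "A", "B", "B", "C", "C"],
    "debt_to_gdp": [45, 95, 105, 60, 130, 88],
    "gdp_growth": [3.1, 2.0, 2.4, 2.8, 1.9, 2.5],
})
df["bucket"] = pd.cut(df["debt_to_gdp"],
                      bins=[0, 30, 60, 90, float("inf")],
                      labels=["0-30%", "30-60%", "60-90%", ">90%"])
print(df.groupby("bucket", observed=True)["gdp_growth"].mean())

The entire dispute hinged on which rows made it into that average, which is exactly the sort of detail that becomes impossible to audit once the original software environment is gone.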
Herndon could easily test the Harvard economists’ conclusions because the software that they had used to calculate their results—Microsoft Excel—was readily available. But what about much older findings for which the software originally used is hard to come by?
You might think that the solution—preserving the relevant software for future researchers to use—should be no big deal. After all, software is nothing more than a bunch of files, and those files are easy enough to store on a hard drive or on tape in digital format. For some software at least, the all-important source code could even be duplicated on paper, avoiding the possibility that whatever digital medium it’s written to could become obsolete.
Saving old programs in this way is done routinely, even for decades-old software. You can find online, for example, a full program listing for the Apollo Guidance Computer—code that took astronauts to the moon during the 1960s. It was transcribed from a paper copy and uploaded to GitHub in 2016.
While perusing such vintage source code might delight hard-core programmers, most people aren’t interested in such things. What they want to do is use the software. But keeping software in ready-to-run form over long periods of time is enormously difficult, because to be able to run most old code, you need both an old computer and an old operating system.
You might have faced this challenge yourself, perhaps while trying to play a computer game from your youth. But being unable to run an old program can have much more serious repercussions, particularly for scientific and technical research.
Along with economists, many other researchers, including physicists, chemists, biologists, and engineers, routinely use software to slice and dice their data and visualize the results of their analyses. They simulate phenomena with computer models that are written in a variety of programming languages and that use a wide range of supporting software libraries and reference data sets. Such investigations and the software on which they are based are central to the discovery and reporting of new research results.
Imagine that you’re an investigator and want to check calculations done by another researcher 25 years ago. Would the relevant software still be around? The company that made it may have disappeared. Even if a contemporary version of the software exists, will it still accept the format of the original data? Will the calculations be identical in every respect—for example, in the handling of rounding errors—to those obtained using a computer of a generation ago? Probably not.
Researchers’ growing dependence on computers and the difficulty they encounter when attempting to run old software are hampering their ability to check published results. The problem of obsolescent software is thus eroding the very premise of reproducibility—which is, after all, the bedrock of science.
The issue also affects matters that could be subject to litigation. Suppose, for example, that an engineer’s calculations show that a building design is robust, but the roof of that building nevertheless collapses. Did the engineer make a mistake, or was the software used for the calculations faulty? It would be hard to know years later if the software could no longer be run.
That’s why my colleagues and I at Carnegie Mellon University, in Pittsburgh, have been developing ways to archive programs in forms that can be run easily today and into the future. My fellow computer scientists Benjamin Gilbert and Jan Harkes did most of the required coding. But the collaboration has also involved software archivist Daniel Ryan and librarians Gloriana St. Clair, Erika Linke, and Keith Webster, who naturally have a keen interest in properly preserving this slice of modern culture.
Because this project is more one of archival preservation than mainstream computer science, we garnered financial support for it not from the usual government funding agencies for computer science but from the Alfred P. Sloan Foundation and the Institute for Museum and Library Services. With that support, we showed how to reconstitute long-gone computing environments and make them available online so that any computer user can, in essence, go back in time with just a click of the mouse.
We created a system called Olive—an acronym for Open Library of Images for Virtualized Execution. Olive delivers over the Internet an experience that in every way matches what you would have obtained by running an application, operating system, and computer from the past. So once you install Olive, you can interact with some very old software as if it were brand new. Think of it as a Wayback Machine for executable content.
To understand how Olive can bring old computing environments back to life, you have to dig through quite a few layers of software abstraction. At the very bottom is the common base of much of today’s computer technology: a standard desktop or laptop endowed with one or more x86 microprocessors. On that computer, we run the Linux operating system, which forms the second layer in Olive’s stack of technology.
Sitting immediately above the operating system is software written in my lab called VMNetX, for Virtual Machine Network Execution. A virtual machine is a computing environment that mimics one kind of computer using software running on a different kind of computer. VMNetX is special in that it allows virtual machines to be stored on a central server and then executed on demand by a remote system. The advantage of this arrangement is that your computer doesn’t need to download the virtual machine’s entire disk and memory state from the server before running that virtual machine. Instead, the information stored on disk and in memory is retrieved in chunks as needed by the next layer up: the virtual-machine monitor (also called a hypervisor), which can keep several virtual machines going at once.
Each one of those virtual machines runs a hardware emulator, which is the next layer in the Olive stack. That emulator presents the illusion of being a now-obsolete computer—for example, an old Macintosh Quadra with its 1990s-era Motorola 68040 CPU. (The emulation layer can be omitted if the archived software you want to explore runs on an x86-based computer.)
The next layer up is the old operating system needed for the archived software to work. That operating system has access to a virtual disk, which mimics actual disk storage, providing what looks like the usual file system to still-higher components in this great layer cake of software abstraction.
Above the old operating system is the archived program itself. This may represent the very top of the heap, or there could be an additional layer, consisting of data that must be fed to the archived application to get it to do what you want.
The upper layers of Olive are specific to particular archived applications and are stored on a central server. The lower layers are installed on the user’s own computer in the form of the Olive client software package. When you launch an archived application, the Olive client fetches parts of the relevant upper layers as needed from the central server.
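To give a flavor of that demand-fetch idea—without claiming to reproduce VMNetX’s actual wire format, which isn’t shown here—a virtual disk can be exposed to the emulator as an object that pulls chunks from the server only when they are first read, for instance via HTTP range requests. The URL, chunk size, and caching scheme below are invented for illustration.

# Conceptual sketch of demand-paged access to a remote disk image.
import urllib.request

CHUNK = 128 * 1024  # bytes fetched per request

class RemoteDiskImage:
    def __init__(self, url: str):
        self.url = url
        self.cache: dict[int, bytes] = {}   # chunk index -> data already fetched

    def read(self, offset: int, length: int) -> bytes:
        """Return `length` bytes at `offset`, fetching missing chunks on demand."""
        out = bytearray()
        for idx in range(offset // CHUNK, (offset + length - 1) // CHUNK + 1):
            if idx not in self.cache:
                req = urllib.request.Request(
                    self.url,
                    headers={"Range": f"bytes={idx * CHUNK}-{(idx + 1) * CHUNK - 1}"})
                with urllib.request.urlopen(req) as resp:
                    self.cache[idx] = resp.read()
            out += self.cache[idx]
        start = offset % CHUNK
        return bytes(out[start:start + length])

Because an old operating system typically touches only a small fraction of its disk during a session, this lazy approach lets an archived environment start up long before the full image has crossed the network.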
That’s what you’ll find under the hood. But what can Olive do? Today, Olive consists of 17 different virtual machines that can run a variety of operating systems and applications. The choice of what to include in that set was driven by a mix of curiosity, availability, and personal interests. For example, one member of our team fondly remembered playing The Oregon Trail when he was in school in the early 1990s. That led us to acquire an old Mac version of the game and to get it running again through Olive. Once word of that accomplishment got out, many people started approaching us to see if we could resurrect their favorite software from the past.
The oldest application we’ve revived is Mystery House, a graphics-enabled game from the early 1980s for the Apple II computer. Another program is NCSA Mosaic, which people of a certain age might remember as the browser that introduced them to the wonders of the World Wide Web.
Olive provides a version of Mosaic that was written in 1993 for Apple’s Macintosh System 7.5 operating system. That operating system runs on an emulation of the Motorola 68040 CPU, which in turn is created by software running on an actual x86-based computer that runs Linux. In spite of all this virtualization, performance is pretty good, because modern computers are so much faster than the original Apple hardware.
Pointing Olive’s reconstituted Mosaic browser at today’s Web is instructive: Because Mosaic predates Web technologies such as JavaScript, HTTP 1.1, Cascading Style Sheets, and HTML 5, it is unable to render most sites. But you can have some fun tracking down websites composed so long ago that they still look just fine.
What else can Olive do? Maybe you’re wondering what tools businesses were using shortly after Intel introduced the Pentium processor. Olive can help with that, too. Just fire up Microsoft Office 4.3 from 1994 (which thankfully predates the annoying automated office assistant “Clippy”).
Perhaps you just want to spend a nostalgic evening playing Doom for DOS—or trying to understand what made such first-person shooter games so popular in the early 1990s. Or maybe you need to redo your 1997 taxes and can’t find the disk for that year’s version of TurboTax in your attic. Have no fear: Olive has you covered.
On the more serious side, Olive includes Chaste 3.1. The name of this software is short for Cancer, Heart and Soft Tissue Environment. It’s a simulation package developed at the University of Oxford for computationally demanding problems in biology and physiology. Version 3.1 of Chaste was tied to a research paper published in March 2013. Within two years of publication, though, the source code for Chaste 3.1 no longer compiled on new Linux releases. That’s emblematic of the challenge to scientific reproducibility Olive was designed to address.
To keep Chaste 3.1 working, Olive provides a Linux environment that’s frozen in time. Olive’s re-creation of Chaste also contains the example data that was published with the 2013 paper. Running the data through Chaste produces visualizations of certain muscle functions. Future physiology researchers who wish to explore those visualizations or make modifications to the published software will be able to use Olive to edit the code on the virtual machine and then run it.
For now, though, Olive is available only to a limited group of users. Because of software-licensing restrictions, Olive’s collection of vintage software is currently accessible only to people who have been collaborating on the project. The relevant companies will need to give permissions to present Olive’s re-creations to broader audiences.
We are not alone in our quest to keep old software alive. For example, the Internet Archive is preserving thousands of old programs using an emulation of MS-DOS that runs in the user’s browser. And a project being mounted at Yale, called EaaSI (Emulation as a Service Infrastructure), hopes to make available thousands of emulated software environments from the past. The scholars and librarians involved with the Software Preservation Network have been coordinating this and similar efforts. They are also working to address the copyright issues that arise when old software is kept running in this way.
Olive has come a long way, but it is still far from being a fully developed system. In addition to the problem of restrictive software licensing, various technical roadblocks remain.
One challenge is how to import new data to be processed by an old application. Right now, such data has to be entered manually, which is both laborious and error prone. Doing so also limits the amount of data that can be analyzed. Even if we were to add a mechanism to import data, the amount that could be saved would be limited to the size of the virtual machine’s virtual disk. That may not seem like a problem, but you have to remember that the file systems on older computers sometimes had what now seem like quaint limits on the amount of data they could store.
Another hurdle is how to emulate graphics processing units (GPUs). For a long while now, the scientific community has been leveraging the parallel-processing power of GPUs to speed up many sorts of calculations. To archive executable versions of software that takes advantage of GPUs, Olive would need to re-create virtual versions of those chips, a thorny task. That’s because GPU interfaces—what gets input to them and what they output—are not standardized.
Clearly there’s quite a bit of work to do before we can declare that we have solved the problem of archiving executable content. But Olive represents a good start at creating the kinds of systems that will be required to ensure that software from the past can live on to be explored, tested, and used long into the future.
This article appears in the October 2018 print issue as “Saving Software From Oblivion.”
Mahadev Satyanarayanan is a professor of computer science at Carnegie Mellon University, in Pittsburgh.
There is perhaps no more hotly pursued area in alternative energy than artificial photosynthesis, with research papers jumping from 11,000 in 2010 to 21,500 in 2017, according to some estimates. Artificial photosynthesis is used to either split water molecules into hydrogen and oxygen or reduce carbon dioxide. Last year, we visited the U.S. Department of Energy’s Joint Center for Artificial Photosynthesis (JCAP)—one of the leading labs in the world in this field. We saw that water-splitting research had reached a level of success that made the researchers look to the new challenge of carbon dioxide reduction.
In a significant development to this line of research, scientists in Japan have developed a photoelectrode using a titanium dioxide semiconductor layered between a gold film and gold nanoparticles. The device manages to absorb 85 percent of all visible light. In addition, the novel electrode’s photon-to-current conversion efficiency was 11 times as great as that of a device that did not use a gold nanofilm. The upshot: It could yield an extraordinarily efficient means for converting sunlight into renewable energy.
In research described in the journal Nature Nanotechnology, researchers at Hokkaido University in Sapporo, Japan, in collaboration with researchers at National Chiao Tung University, in Taiwan, found that simply adding gold nanoparticles on top of a semiconductor like titanium dioxide did not provide the amount of light absorption they were looking to achieve for their new electrode.
The trick to achieving such a large boost in light-absorbing efficiency was creating a sandwich of materials in which a 100-nanometer gold film and gold nanoparticles served as the outside “bread” layers to the titanium dioxide semiconductor in the middle. When light hit the gold nanoparticles on one side, the gold film on the other side acted like a mirror and trapped the light in a nanocavity so that the gold nanoparticles could continue to absorb more light.
The addition of the gold film was critical for creating the nanocavity. But the gold nanoparticles’ use of plasmonics, which exploits the waves of electrons that develop when photons strike a metal surface, was also invaluable.
When the surface plasmons, generated by the light hitting the gold nanoparticles, resonate at the same wavelength as the nanocavity, a strong coupling between the plasmons and the cavity occurs, leading to the high efficiency in converting light into current.
Hiroaki Misawa, the professor at Hokkaido University who led the research, concedes that it is difficult to translate the photoelectrode’s efficiency into straight-up comparisons with other water splitting systems.
“In this case, apparent absorption is increased if a lot of semiconductor particles are prepared in the system,” said Misawa. “On the other hand, a water splitting system using silicon solar cells has also been developed in which all visible light can be employed for the water splitting. Therefore, it is difficult to compare with certainty. Most importantly, in this study, it is possible to harvest 85 percent of visible light even with a metal oxide semiconductor electrode only 30-nm thick—a very small amount of materials.”
In future research, Misawa and his team will be exploring the mechanism of strong-coupling-enhanced, plasmon-induced charge formation and separation. They also intend to expand the application of the photoelectrode they developed to the other light-energy conversions, such as ammonia photosynthesis and solid-state solar cells.
If you’re looking for the perfect add-on to your megayacht, how about a personal submarine? Triton Submarines can set you up. The company, based in Vero Beach, Fla., specializes in high-end submersibles that can dive as deep as 1,000 meters. Now, Triton has partnered with luxury carmaker Aston Martin, based in England, to build a limited-edition model. Due in early 2019, it combines ultimate style with hydrodynamic performance.
Over the past 12 years, Triton’s subs have earned a reputation for safety, maneuverability, and comfort. But back in 2008, when the company was founded, the idea of a personal submersible was a tough sell. Too many potential buyers had seen too many Hollywood action movies featuring doomed submarines, recalls Triton’s president, Patrick Lahey, who cofounded the company with CEO L. Bruce Jones.
“People thought [submarines] must be massively complicated and dangerous,” Lahey says. “I’ll forever be grateful to our first customer. Putting our sub on his vessel and having it displayed at boat shows really got the conversation started.” Today, Triton’s preorders and word-of-mouth recommendations continue to propel the firm’s growth.
Triton subs all feature spherical transparent cabins, which provide the widest possible window on ocean flora, fauna, and landforms while resisting the deep’s crushing pressures. Figuring out how to build the cabins took some doing. In 2011, production of Triton’s most popular model—the US $3.8 million three-person 3300/3—hit a wall when suppliers were unable to cast the 2.1-meter-diameter, 2.2-metric-ton acrylic bulb. “It actually threatened to take us out of business because we had a couple of orders that we couldn’t fill,” says Lahey. Triton turned to German acrylics pioneer Evonik Industries, which developed a more uniform thermal-forming process.
To shrink the profile of its subs, Triton engineers shifted from lead-acid batteries to fire-resistant lithium iron magnesium phosphate batteries. The lithium iron batteries pack 90 watt-hours per kilogram, more than double the energy density of lead batteries. The slimmed-down subs can be tucked inside a yacht’s hangar, rather than sitting on deck. The batteries’ fire-safe chemistry eased acceptance from international ship certification firms such as Norway’s DNV GL. Triton subjects all of its subs to rigorous independent certification, comparable to what commercial aircraft go through. “We don’t build experimental subs,” says Lahey.
Codeveloping a sub with Aston Martin is about maxing out styling, creature comforts, and performance. The new sub features a more powerful quartet of thrusters and streamlined hydrodynamics, which will propel the vessel at a relatively snappy 6 knots (11 kilometers per hour). The thrusters also offer greater control when navigating through coral reefs and other tricky terrain, and for holding steady in strong currents.
“Jacques Cousteau had a great saying: Speed is the enemy of observation,” says Lahey. “You don’t pull up to the Louvre and put on your running shoes and sprint through the place. You stop and take the time to drink it in.”
Bioluminescence expert Edith Widder says the maneuverability of Triton subs is already top-notch, likening it to flying a helicopter. In 2012 Widder, who’s CEO of the Florida-based Ocean Research & Conservation Association, was on a dive campaign in Japan that used a Triton 3300/3 and scored the first-ever sighting of a giant squid in its habitat.
Widder says privately owned subs fill a need when government funding isn’t forthcoming. “We’re back to the time of the Medicis, where scientists get access through wealthy people,” she says. These days, governments prefer to fund cheaper, deeper-diving unmanned underwater vehicles, but Widder says crewed submersibles remain unbeatable for venturing into the unknown. “When you know what you want to do, you can build a robot to do it. But when you’re exploring, there’s nothing more adaptable than a human.”
Soon Triton may be taking yachters and scientists deeper. Lahey says a new design will push well beyond Triton’s current 1,000-meter limit, with details to be revealed as early as mid-October.
This article appears in the October 2018 print issue as “Triton Submarines’ Dive Into Luxury.”
Earlier this year, Diligent Robotics introduced a mobile manipulator called Poli, designed to take over non-care related, boring logistical tasks from overworked healthcare professionals who really should be doing better things with their time. Specifically, Diligent wants to automate things like bringing supplies from a central storage area to patient rooms, which sounds like it should be easy, but is actually very difficult. Autonomous mobile manipulation in semi-structured environments is hard at the best of times, and things get even harder in places like hospitals that are full of busy humans rushing around trying to save the lives of other humans.
Over the past few months, Diligent has been busy iterating on the design of their robot, and they’ve made enough changes that it’s no longer called Poli. It’s a completely new robot, called Moxi.
As a friendly, sensitive, and intuitive robot, Moxi not only alleviates clinical staff of routine tasks but does so in a non-threatening and supportive way that encourages positive relationships between humans and robots, further enhancing clinical staff’s ability to and interest in leveraging AI in the healthcare industry. Created with a face to visually communicate social cues and able to show its intention before moving to the next task, Moxi is built to foster trust between patients and staff alike, setting the stage for future innovation and partnerships with developing technology. Moxi’s specific tasks and responsibilities at each hospital will be tailored to fit each hospital’s needs.
While Diligent’s general concept for a mobile manipulator for hospitals is the same as it’s always been, Moxi is much, much different than its predecessor, Poli, that we wrote about in January. Moxi uses a Freight mobile base from Fetch Robotics, which seems like a reasonable thing to do if your company is about manipulation and human-robot interaction (HRI) and you just want the navigation and obstacle avoidance to work without you having to stress about it. Moxi is significantly more human-like than earlier designs (with a pronounced head and torso), which presumably makes HRI more straightforward, although there’s that Velodyne Puck that almost looks like it was added as an afterthought. For manipulation, the robot relies on a Kinova Jaco arm and Adaptive Gripper from Robotiq.
The video shows some fairly standard mobile manipulator capabilities—navigation and obstacle avoidance, plus the ability to pick items out of bins and drop them into what looks like a little partition on the base. We’re told that this is fully autonomous, though I’m not totally clear on what happens at the other end of this process, when presumably the robot needs to pick things out of its storage area (can it see down there?) and place them on shelves or in bins or something. And not to be needlessly suspicious, but there’s only so much we can infer from videos like this, because they invariably show the best case scenarios of how robots operate. Having said that, it’s also important to keep things in context: We’re seeing a prototype during a pilot, after all. If it worked perfectly, it would be a commercial product already, right?
For more details on how Diligent is working on getting Moxi towards that idea of a perfect commercial product, we spoke with CEO and co-founder Andrea Thomaz, who is also a professor of electrical and computer engineering at the University of Texas at Austin, where she leads the Socially Intelligent Machines Lab.
IEEE Spectrum: We spoke with you about Diligent in January; what have you learned from your pilot customers since then about deploying mobile manipulators in hospitals?
Andrea Thomaz: Our priority from the beginning has been to make healthcare professionals enjoy and feel supported in their work again, so we were most nervous that idea wouldn’t be heard given that many people see AI as a professional threat rather than a tool. But the more research we’ve done and the more enthusiastic healthcare staff we’ve met, we feel even more confident in our vision for Moxi (and many other AI services) to improve the roles of clinical staff by taking away their many repetitive, logistical responsibilities so they can spend more time with patients.
Our early customer engagements have validated the need for a robot to be adaptable to each location, both in terms of the physical environment for navigation and manipulation and in terms of what specific support tasks will be most valuable for different hospital departments depending on the patient population, unit workflows, etc. One expectation we have confirmed is that millimeter-accuracy navigation isn’t solved. As a result, we are coming up with solutions that use vision to get the robot to repeatable millimeter manipulation accuracy.
Another thing we’ve learned with our pilots is that each unit and healthcare environment is different, and as a result, a wide variety of sensing is required. Specifically, we’re experimenting with sensors that have different scales, resolutions, and ranges. We are making use of sensor fusion and multimodal perception to allow Moxi to better perceive each semi-structured environment.
What iterations did you go through to get from Poli to Moxi? What changes did you make to the hardware and why were they necessary?
We’ve made quite a few changes from Poli to Moxi, all focused on our idea of making a robot that feels more natural and accepted by people experiencing it in their everyday environment. Our very user-centric design approach helped us understand the limitations of Poli so we could improve Moxi. For example, we liked the safety, form factor, and range of manipulation of the Kinova Jaco2; however, its original mounting orientation limited the workspace of the arm, so we changed it. That change also improved the aesthetics of Moxi.
Another change was mounting Moxi’s entire torso on a telescoping pillar, instead of mounting only the arm on a linear actuator as we did with Poli. Both designs achieve the manipulation goal of having that degree of freedom to expand the workspace, reaching high shelves and down to the floor, but the telescoping pillar also accomplishes aesthetic and social goals by allowing the robot to have a smaller footprint as it works near people. Having the arm and head move in tandem is also important for the robot to feel socially appropriate.
For our pilot testing phase of the product, we added a Velodyne Puck (VLP-16), which allows us to perform perception tasks that cannot be done with just an RGB-D sensor in the head. As we mentioned earlier, this is one of the sensors we are fusing to allow Moxi to perceive its environment. For closer range depth perception, we selected the Intel RealSense as a replacement to the Kinect sensor used on Poli. We did a series of benchmarking with a variety of RGB-D cameras and found this was the best in market for us right now.
We are using the Fetch F100 as our mobile base with Moxi. Our biggest constraint in a hospital environment is the footprint of the base, and after piloting the F100 we’ve found that it’s able to navigate all the spaces that we need.
The visual design for Moxi is much different than the design for Poli. Can you talk about some specific HRI-related design decisions that you made, and why you made them?
One of our biggest design goals with our vision for making Moxi more natural in human environments was to downsize Moxi from Poli’s size, so Moxi doesn’t feel obstructive in busy healthcare environments. Our second design goal was to make Moxi feel like it’s friendly, and an important part of the team. Through the brilliant work of our industrial designer, Carla Diana, we determined Moxi needed to have soft, unified, and rounded design features.
From an HRI perspective, we are always trying to find intuitive ways to make the robot’s behavior transparent to the people around it. We decided that for Moxi to demonstrate social intelligence and awareness in healthcare environments, we needed to design various ambient communication tools it could use to proactively communicate its intentions to people, such as its moving LED arrays and head movements. Poli was a more non-anthropomorphic social robot, and with Moxi we deliberately moved toward a more explicitly social face to show the kind of socially communicative, positive contributor Moxi is on a team. People have already had an immediately positive and warm reaction when meeting Moxi, which we’re thrilled to see, as that type of immediate connection will be important for every clinical staff team welcoming Moxi on board.
Can you describe what we’re seeing in the video?
We created this video to introduce Moxi and show its design, personality, and basic functionalities. We wanted to use it to demonstrate how its many pieces and qualities work together to bring to life not just a single product, but an idea: An idea that robots don’t have to be scary or isolated, but they can and should be friendly and functional members of a team.
Moxi is fully autonomous, so it completes end-to-end tasks; it doesn’t need assistance at either end of its actions, because its arm allows it to pick something up and drop it off without help. For specific movements such as picking something up from a bin, Moxi uses a variety of sensors to orient itself and execute reliable grasps of objects, regardless of their arrangement. Additionally, Moxi can verify a failed grasp and retry if needed. All of this has culminated in a system that is already quite successful at a wide range of manipulation tasks, especially those that healthcare professionals are responsible for.
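To make that verify-and-retry grasping behavior concrete, here is a minimal sketch of the kind of loop Thomaz describes. It is illustrative only: the FakePerception and FakeArmGripper classes and all of their methods are hypothetical stand-ins, not Diligent's software or API.

```python
import random

# Illustrative sketch of a grasp-verify-retry loop, as described above.
# The classes below are hypothetical stand-ins for a perception system
# and an arm/gripper, not Diligent's actual software.

class FakePerception:
    def detect_object(self, name):
        # Pretend we always find a grasp pose for the named item.
        return {"item": name, "xyz": (0.4, 0.0, 0.9)}

class FakeArmGripper:
    _holding = False

    def execute_grasp(self, pose):
        # Simulate a grasp that succeeds about 80 percent of the time.
        self._holding = random.random() < 0.8

    def holding_object(self):
        return self._holding

    def open(self):
        self._holding = False

def fetch_item(item, perception, arm, max_attempts=3):
    """Pick up `item`, re-perceiving and retrying after a failed grasp."""
    for _ in range(max_attempts):
        pose = perception.detect_object(item)
        if pose is None:
            continue                    # item not visible; look again
        arm.execute_grasp(pose)
        if arm.holding_object():        # verify via gripper/camera feedback
            return True
        arm.open()                      # failed grasp: release and retry
    return False

if __name__ == "__main__":
    ok = fetch_item("supply item", FakePerception(), FakeArmGripper())
    print("grasp succeeded" if ok else "grasp failed after retries")
```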
The robot can navigate around humans who are moving. We take advantage of the navigation stack developed by Fetch, which does a great job with obstacle avoidance and path planning. On top of this we are developing the social intelligence to avoid humans differently than objects, as well as ensuring that the paths the robot takes are transparent and legible to the people in the environment.
What kinds of non-patient facing, logistical tasks do you think Moxi will be most successful at in the near term? What are the trickiest problems that you’re still working on solving?
With our pilot customers, we are exploring a variety of support tasks related to making sure certain supplies are in particular locations at a given time. In the near term, the robot is going to be most successful at achieving the supply setup for care tasks that might be known several hours in advance, like that an admission is happening because someone is being discharged from surgery. This is as opposed to working in an emergency department where clinical care is happening on a much more unpredictable time scale, and supplies are needed with more immediacy.
Patient care workflows are very dynamic and are constantly changing throughout the day, so one of the challenges we are still working on solving is understanding how to best insert Moxi into these workflows.
Our current platform is explicitly developed with design iteration in mind. We have a lightweight arm and gripper, with a small form factor to be able to operate in the kinds of tight spaces we will be in. In this phase of the product we are testing the set of capabilities this current prototype can perform, as well as learning about any desired capabilities that are outside of the specs for this particular arm and gripper solution. All of these learnings with customers will inform our final product offering, and next revision of the product.
Thomaz tells us that Diligent is just starting Moxi’s pilot program, with the goal of figuring out “how Moxi can best support and work with each clinical team.” It sounds like Moxi is just another step in this process, and more iterations in software, hardware, and overall design will take place before Diligent finalizes their commercial robot. And it may take a little while—the only thing harder than mobile manipulation in semi-structured environments just might be autonomous interaction with humans, and Diligent will need to conquer both of those challenges if they want to end up with a useful robot that people like to work with.
This content is brought to you by TE Connectivity.
TE Connectivity is focused on creating a safer, sustainable, productive and connected future.
TE Connectivity is the go-to engineering partner for today's innovation leaders and technology entrepreneurs. We are helping solve tomorrow's toughest challenges with advanced connectivity and sensor solutions.
Learn how Nikola Motor Company partnered with TE Connectivity to enable fully electric hydrogen powered long-haul trucks
In March, a ProPublica and Mother Jones report put the spotlight on years of reports by laid-off IBM employees that they had been targeted due to their age. In May, the U.S. Equal Employment Opportunity Commission began a nationwide investigation into age discrimination complaints against the company. Also in May, Jonathan Langley, an Austin-based IBM employee, filed a lawsuit charging age discrimination in his firing.
And now, a lawyer—who famously sued Uber for allegedly misclassifying its drivers as independent contractors—has picked up the ball and is expected to run hard with it.
Attorney Shannon Liss-Riordan today filed a class action lawsuit on behalf of three former IBM employees in their 50s and 60s, charging that when IBM fired them earlier this year, the company discriminated against them based on age.
More former employees are likely to join the class of plaintiffs; we at Spectrum have for years heard anecdotal reports from individuals who believe they were targeted for layoffs because of their age.
In an emailed statement, IBM indicated that any workforce changes were “about skills, not age. In fact, since 2010 there is no difference in the age of our U.S. workforce, but the skills profile has changed dramatically.”
This statement is hard to verify, because several years ago IBM stopped including any data about its U.S. workforce in its annual report—it no longer even reports the size of the workforce, much less the average age or skills profile.
Comments from former employees to the Facebook group Watching IBM were generally supportive of the class action suit. Said one commenter, “Many hundreds of people that I know, that were laid off in the March action, were part of the so-called strategic imperatives. Everyone in my group was over 50 and most of the people that I know personally were all over 45.”
Said another, reacting to the official IBM statement, “If our skills were obsolete, they would not have had us train our replacements in other countries before throwing us under the bus. They would have abandoned our obsolete practices.”
Last Monday, we covered the new, updated, and way way better guidelines for the ANA Avatar XPRIZE. Since we were mostly talking with the folks over at XPRIZE, we didn’t realize that ANA (All Nippon Airways) is putting a massive amount of effort into this avatar concept— they’re partnering with JAXA, the Japan Aerospace Exploration Agency, “to create a new space industry centered around real-world avatars.”
AVATAR X aims to capitalize on the growing space-based economy by accelerating development of real-world Avatars that will enable humans to remotely build camps on the Moon, support long-term space missions and further explore space from afar.
These avatars will be essentially the same sorts of things that the Avatar XPRIZE is looking to advance: Robotic systems designed to operate with a human in the loop through immersive telepresence, allowing them to complete tasks like a human could without a human needing to be physically there.
JAXA says that they’re interested in the usual stuff, like remote construction in space and maintenance, but also in “space-based entertainment and travel for the general public,” so use your imagination on that one. The AVATAR X program will go through several different phases, beginning quite sensibly with some Earth-based testing, which will happen at a new lab to be built in what looks like an artificial impact crater, with a futuristic building somehow hovering in the middle of it:
Of course, JAXA is not alone with this telepresence robots in space idea—for years, NASA has been suggesting that Valkyrie-like robots (likely controlled through a combination of full teleop, assistive teleop, and autonomy) are the best way to get stuff done in space, or in other places where humans are too expensive and squishy. Here’s a NASA rendering, for example:
That’s probably a bit far into the future, but in the nearer term, Robonaut was also intended to take over routine space station tasks. Things are maybe not moving quite as quickly as NASA has been hoping, but rumor has it that there will be a follow-up Space Robotics Challenge happening at some point. At the pace ANA and JAXA are going, though, it’s looking like the plan is to have operational hardware on the ISS within the next several years, which could mean that Robonaut (if, once repaired, it returns to the ISS) will have a robotic buddy up there to help it get some useful work done.
The actual schedule for Avatar X is a little bit unclear— there’s supposed to be on-Earth hardware testing in 2019, testing on the ISS in “202X,” and then the Moon and Mars “in the future.” What we do know is that one of the companies involved, Meltin, has already commenced “full-scale development on the MELTANT avatar robot for deployment in space.” MELTANT is this shiny-headed dude:
Be it in space, on the moon’s surface, or on the surface of mars, long-distance remote control robots like MELTANT will protect astronauts from the dangers of space while lowering the high cost of space missions. It will bring about improvements in safety, efficiency, and cost effectiveness to contribute to the advancement of space technology and the creation of usable regions in space.
Here’s a sampling of what MELTANT is supposed to be able to do:
So far, all of this is speculative, and MELTANT will need to prove itself here on Earth before proving itself anywhere else. As with all of these sorts of things, big ideas are easy, but getting robots to execute on them is hard. We’re glad that JAXA and ANA are putting some muscle behind this, and we’ll be tracking their progress carefully over the coming years.
[ JAXA ]
David Patterson—University of California professor, Google engineer, and RISC pioneer—says there’s no better time than now to be a computer architect.
That’s because Moore’s Law really is over, he says: “We are now a factor of 15 behind where we should be if Moore’s Law were still operative. We are in the post–Moore’s Law era.”
This means, Patterson told engineers attending the 2018 @Scale Conference held in San Jose last week, that “we’re at the end of the performance scaling that we are used to. When performance doubled every 18 months, people would throw out their desktop computers that were working fine because a friend’s new computer was so much faster.”
But last year, he said, “single program performance only grew 3 percent, so it’s doubling every 20 years. If you are just sitting there waiting for chips to get faster, you are going to have to wait a long time.”
For a computer architect like Patterson, this is actually good news. It’s also good news for innovative software engineers, he pointed out. “Revolutionary new hardware architectures and new software languages, tailored to dealing with specific kinds of computing problems, are just waiting to be developed,” he said. “There are Turing Awards waiting to be picked up if people would just work on these things.”
As an example on the software side, Patterson indicated that rewriting Python into C gets you a 50x speedup in performance. Add in various optimization techniques and the speedup increases dramatically. It wouldn’t be too much of a stretch, he indicated, “to make an improvement of a factor of 1,000 in Python.”
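As a rough illustration of the kind of gap Patterson is pointing at, the sketch below times the same matrix-vector product written as a plain Python loop and as a vectorized NumPy call that dispatches to compiled code. The measured ratio will vary by machine; the 50x and 1,000x figures above are Patterson's, not this script's output.

```python
import time
import numpy as np

# Illustrative only: the same matrix-vector product as a plain Python loop
# and as a vectorized NumPy call (which runs in optimized C/BLAS code).

N = 1000
A = np.random.rand(N, N)
x = np.random.rand(N)

def matvec_pure_python(A, x):
    rows, cols = A.shape
    y = [0.0] * rows
    for i in range(rows):
        s = 0.0
        for j in range(cols):
            s += A[i, j] * x[j]
        y[i] = s
    return y

t0 = time.perf_counter()
matvec_pure_python(A, x)
t_python = time.perf_counter() - t0

t0 = time.perf_counter()
y = A @ x                      # vectorized: the loop runs in compiled code
t_numpy = time.perf_counter() - t0

print(f"pure Python: {t_python:.3f} s, NumPy: {t_numpy:.5f} s, "
      f"speedup ~{t_python / t_numpy:.0f}x on this machine")
```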
On the hardware front, Patterson thinks domain-specific architectures just run better, saying, “It’s not magic—there are just things we can do.” For example, applications don’t all require that computing be done at the same level of accuracy. For some, he said, you could use lower-precision floating-point arithmetic instead of the commonly used IEEE 754 standard.
The biggest area of opportunity right now for applying such new architectures and languages is machine learning, Patterson said. “If you are a hardware person,” he said, “you want friends who desperately need more computers.” And machine learning is “ravenous for computing, which we just love.”
Today, he said, there’s a vigorous debate surrounding which type of computer architecture is best for machine learning, with many companies placing their bets. Google has its Tensor Processing Unit (TPU), with one core per chip and software-controlled memory instead of caches; Nvidia’s GPU has 80-plus cores; and Microsoft is taking an FPGA approach.
And Intel, he said, “is trying to make all the bets,” marketing traditional CPUs for machine learning, purchasing Altera (the company that provides FPGAs to Microsoft), and buying Nervana, with its specialized neural-network processor (similar in approach to Google’s TPU).
Along with these major companies offering different architectures for machine learning, Patterson says there are at least 45 hardware startups tackling the problem. Ultimately, he said, the market will decide.
“This,” he says, “is a golden age for computer architecture.”
Trucking is vital to the way we live. Trucks haul the final miles between warehouses and stores. Supermarket shelves would be empty without their weekly deliveries. Long-haul trucks carry what’s needed to where it’s needed quickly and affordably. It’s fair to say that without trucking, life as we know it would not be possible.
While trucking may be essential, it comes with environmental costs associated with diesel engine emissions. Standards are getting tougher around the world. Europe is moving towards outlawing emissions altogether by 2030.
To make diesel engines greener, manufacturers have partnered with TE Connectivity to develop and apply the fluid quality, pressure, and temperature sensors used in the after-treatment emission systems that reduce pollutants. Selective Catalytic Reduction (SCR) technology cuts NOx emissions by dosing urea (Diesel Exhaust Fluid, or DEF) into the exhaust stream ahead of a catalyst, where the urea breaks down into ammonia that converts NOx into nitrogen and water. TE's urea quality sensors ensure the concentration and quality of the DEF meets industry standards. If the ratio of urea to demineralized water is out of specification or the fluid is contaminated, the sensor reports this to the engine control system, which then adjusts engine operation to keep exhaust emissions within environmental regulations.
Although cleaner diesel engines are an improvement, they still create carbon-based emissions. The future may lie in harnessing new technologies that rely on cleaner forms of energy.
Nikola gearbox close up
A FUEL FOR TOMORROW: HYDROGEN
One promising solution is a fully electric long-haul truck currently under development by Nikola Motor Company. Hydrogen fuel cells will create the current that charges the truck’s batteries and powers the drive train.
The idea of hydrogen as a fuel is not a new one. In fact, it precedes the era of oil. In 1806, Francois Isaac de Rivaz invented the first hydrogen-powered internal combustion engine. The hydrogen was held in a balloon.
By 1863, the hydrogen-powered Lenoir Hippomobile became the first successful commercial vehicle. Gas-powered engines started appearing in 1870.1 Fast-forward 150 years, and technology has advanced to the point where hydrogen is ready for prime time. Hydrogen fuel cells have already proven themselves in spaceflight, and the technology is on the cusp of making hydrogen commercially viable.
TURNING HYDROGEN INTO ELECTRICITY
While the technology behind the hydrogen fuel cell may be sophisticated, the science is a fairly basic electrochemical process. First, hydrogen gas meets an anode inside a fuel cell. Together with a catalyst, the anode splits each hydrogen molecule into hydrogen ions (protons) and electrons.
The reaction at the anode: 2H2 → 4H+ + 4e-
Attracted by the cathode, the positively charged hydrogen ions pass through an electrolytic membrane. Unable to pass through this membrane, the electrons instead flow through a wire outside the fuel cell, generating the current, before arriving at the cathode and completing the circuit. At the cathode, the hydrogen ions and electrons recombine with oxygen from the air to form water.
The reaction at the cathode: O2 + 4H+ + 4e- → 2H2O
In the PEM (Proton Exchange Membrane) fuel cell used in a hydrogen-powered vehicle, bipolar plates positioned on either side of the cell act as both gas distributors and current collectors. The electrolyte is a thin polymer membrane sandwiched between these plates, and it only allows protons to pass through. To function properly, the membrane is kept moist by the water produced by the fuel cell.2
A typical fuel cell produces 0.6 V to 0.7 V. To create sufficient voltage to charge the truck's batteries, PEM fuel cells are connected in series, forming what is known as a fuel cell stack. The greater the surface area of a fuel cell, the more current it can generate.3 By connecting cells in series to build voltage and increasing cell area to boost current, it is possible to generate the large amounts of electrical power needed for a class 8 truck.
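As a back-of-the-envelope sketch of how those two levers combine (cells in series set the voltage, cell area sets the current), consider the calculation below. All of the numbers are illustrative assumptions, not the specifications of Nikola's actual stack.

```python
# Back-of-the-envelope fuel cell stack sizing, using the relationships
# described above. The figures are illustrative assumptions only.

CELL_VOLTAGE = 0.65        # volts per cell (the 0.6-0.7 V range above)
CURRENT_DENSITY = 1.0      # amps per cm^2, an assumed operating point
CELL_AREA = 300.0          # cm^2 of active area per cell, assumed
CELLS_IN_SERIES = 400      # cells stacked to build voltage, assumed

stack_voltage = CELL_VOLTAGE * CELLS_IN_SERIES        # series adds voltage
stack_current = CURRENT_DENSITY * CELL_AREA           # area sets current
stack_power_kw = stack_voltage * stack_current / 1000.0

print(f"Stack voltage: {stack_voltage:.0f} V")        # ~260 V
print(f"Stack current: {stack_current:.0f} A")        # ~300 A
print(f"Stack power:  {stack_power_kw:.0f} kW")       # ~78 kW
```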
The most frequently used catalyst in a hydrogen fuel cell is platinum. Metallic bipolar plates, however, can corrode and reduce the effectiveness of the fuel cell, so low-temperature fuel cells use lightweight metals, graphite, or carbon/thermoset composites as bipolar plate material.4 A stack of lithium batteries collects the energy generated by the fuel cells and distributes it across four motors, one for each wheel at the rear of the truck.
TE and Nikola Collaborate
Working with their engineering team, we’re demonstrating the capabilities of TE connectors in this high voltage environment.
ELECTRICITY MAKES IT RUN, COLLABORATION MAKES IT WORK
For the successful introduction of a major innovation like a fully electric class 8 truck, the safety and reliability of the vehicle are of utmost importance. Nikola partnered with TE Connectivity to leverage our full portfolio of harsh-environment connectors, sensors, and high voltage solutions to ensure this high level of safety and reliability.
Working with their engineering team, we're demonstrating the capabilities of TE connectors in this high voltage environment. For instance, TE's high voltage connectors have integrated high voltage interlocks (HVIL) that ensure the system is safe from high voltage potentials when the connectors are in an unmated condition. The HVIL also ensures that the terminals are unmated only after the high voltage potential has been removed, which eliminates any possibility of destructive arcing between the terminals during the unmating process. The high voltage connectors are fully shielded to keep electromagnetic pulses from interfering with other critical circuitry. TE's standard Deutsch connectors, known for their reliability in harsh environments, also carry over to hydrogen fuel cell powered vehicles.
TE sensor technologies in E-motor resolvers, temperature, voltage, current, humidity, fluid quality, fluid level, position and pressure sensors, can provide robust solutions for various applications within the truck.
For instance, sensors play an important role in the stability of a fully electric truck. Around curves, the wheels on the inside of the curve should turn more slowly than those on the outside. In a conventional truck this cannot be actively controlled, because the drive wheels on an axle are mechanically coupled. In the fully electric truck, each wheel motor powers its wheel independently. Thousands of times a second, the truck's on-board computer samples sensor data from each wheel as well as the steering wheel, brakes, and accelerator pedal, then calculates how each wheel should respond. The motion control system sends this information to each motor, telling it how to react. Because the inner wheels can slow while the outer wheels accelerate, the truck has more control around curves, reducing the chances of fishtailing or rollovers.
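A deliberately simplified sketch of the per-wheel speed calculation behind that idea is shown below, using basic turning geometry. The wheelbase and track-width values are assumed for illustration, and a real motion controller fuses far more sensor data at a much higher rate; this is not Nikola's or TE's control code.

```python
import math

# Simplified per-wheel speed calculation for a turn, illustrative only.
# WHEELBASE and TRACK_WIDTH are assumed values for a class 8 tractor.

WHEELBASE = 5.8     # meters, assumed
TRACK_WIDTH = 2.1   # meters between the rear wheels, assumed

def rear_wheel_speeds(vehicle_speed, steering_angle_deg):
    """Return (inner, outer) rear-wheel speeds in m/s for a given turn."""
    angle = math.radians(steering_angle_deg)
    if abs(angle) < 1e-6:
        return vehicle_speed, vehicle_speed          # straight ahead
    turn_radius = WHEELBASE / math.tan(abs(angle))   # radius at the centerline
    inner = vehicle_speed * (turn_radius - TRACK_WIDTH / 2) / turn_radius
    outer = vehicle_speed * (turn_radius + TRACK_WIDTH / 2) / turn_radius
    return inner, outer

# Example: 20 m/s (about 45 mph) through a 10-degree steering input.
inner, outer = rear_wheel_speeds(20.0, 10.0)
print(f"inner wheel: {inner:.2f} m/s, outer wheel: {outer:.2f} m/s")
```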
As well, TE’s portfolio of high-speed interconnection solutions enable the reliable transfer of big data for various systems that play an important role in the final, road-ready Nikola One class 8 truck.
A NEW LEVEL OF EFFICIENCY
To keep a fully loaded truck moving requires a minimum of 1,850 lb-ft of torque.5 Diesel engines sacrifice acceleration and horsepower to generate sufficient torque. In the hydrogen electric truck, torque is instantaneous.6 Energy-storage-to-torque conversion in an EV is more than 90 percent efficient, versus about 35 percent for diesel engines.7 Since this new breed of trucks has an easier time generating torque, they can also have higher horsepower. The Nikola One's 1,000-horsepower system is capable of powering the truck up a mountain road at 65 miles per hour. Slow-moving trucks with blinking rear lights may one day be a distant memory.
The only emission from a Nikola fully electric truck powered by hydrogen fuel cells is distilled water. But the benefits go far beyond the environmental.
Hydrogen filling is measured in kilograms. The Nikola hydrogen/electric system gets approximately 10 miles per kilogram of hydrogen, while a modern diesel engine gets about 7.2 miles per gallon.8 Since the potential energy in 1 kg of hydrogen is approximately the same as in 1 gallon of diesel, hydrogen works out to roughly 40 percent more efficient.
Because the hydrogen fuel cells create current as long as there's hydrogen in the tank, they continually recharge the batteries. With a 70-kilogram tank, Nikola's new breed of trucks has a range of 500 to 800 miles (depending on terrain) and a reserve charge good for nearly 100 miles once all the hydrogen has been used. A plug-in battery electric truck would fall far short of that distance.
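The arithmetic behind those figures is simple enough to check, using only the numbers quoted above:

```python
# Quick arithmetic using the figures quoted in the text (illustrative only).

MILES_PER_KG_H2 = 10.0        # Nikola hydrogen/electric system, per the text
MILES_PER_GAL_DIESEL = 7.2    # modern diesel benchmark, per the text
TANK_KG = 70.0                # hydrogen tank capacity, per the text
RESERVE_MILES = 100.0         # battery reserve after the hydrogen runs out

# 1 kg of hydrogen holds roughly the same energy as 1 gallon of diesel,
# so miles per unit of energy can be compared directly.
advantage = MILES_PER_KG_H2 / MILES_PER_GAL_DIESEL - 1.0
range_miles = TANK_KG * MILES_PER_KG_H2

print(f"Efficiency advantage over diesel: ~{advantage:.0%}")        # ~39%
print(f"Hydrogen range: ~{range_miles:.0f} miles "
      f"(plus ~{RESERVE_MILES:.0f} miles of battery reserve)")      # ~700 miles
```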
SO WHERE DO YOU GET A HYDROGEN FILL-UP?
Making the hydrogen/electric technology the future of trucking will require a new energy infrastructure to produce hydrogen, too. Rather than a central hub distributing hydrogen through pipelines, the current plan is to build a network of hydrogen-production plants at truck stops across the country. Powered by the electric grid as well as solar, wind, tidal, or geothermal energy, these mini-energy plants will give truckers a reliable source of hydrogen fuel to take them as far as they need to go and do it more affordably, more efficiently, more sustainably and more profitably than ever before.
Hydrogen Fill Station
At TE, we’re proud to play a part in shaping a greener, cleaner, more productive tomorrow for the trucking industry.
© 2017 TE Connectivity Ltd. family of companies. All Rights Reserved 09/2017
TE, TE Connectivity and TE connectivity (logo) are trademarks. Other product and/or company names might be trademarks of their respective owners.
Nikola Motor Company is a trademark.
1 http://fuel-efficient-vehicles.org/energy-news/?page_id=819
2 http://sepuplhs.org/high/hydrogen/hydrogen.html
3 http://www.fuelcellstore.com/fuel-cell-stacks
4 Karim Nice & Jonathan Strickland, "How Fuel Cells Work," 18 September 2000. HowStuffWorks.com. http://auto.howstuffworks.com/fuel-efficiency/alternative-fuels/fuel-cell.htm
5 http://www.autotraining.edu/blog/the-extraordinary-engine-configurations-of-18-wheelers/
6 https://www.carthrottle.com/post/how-do-electric-vehicles-produce-instant-torque/
7 https://cleantechnica.com/2013/09/16/instant-torque-and-blazing-speeds-the-best-thing-about-electric-cars/
8 http://www.popularmechanics.com/cars/trucks/g116/10-things-you-didnt-know-about-semi-trucks/?slide=5
Advancements in enabling connected cars are astonishing. From the time the first Model T rolled off the factory floor, cars’ functionality has been largely unchanged. When advances did happen, they were mostly mechanical: a bigger engine, more efficient transmissions, safer brakes, and more.
Today, we are witnessing a radical reimagining of the automobile. Advances in connectivity are creating opportunities in the automotive industry. Dashboard navigation, infotainment systems, and Bluetooth-enabled dashboards are a glimmer of what is coming in the not-so-distant future.
In 2015, McKinsey estimated that the number of networked cars would rise by 30% a year1. By 2018, automobiles with connected capabilities were almost 39% of the US market2. By 2020, Gartner estimates that 250 million connected vehicles will be on the roadways, “making [them] a major element of the Internet of Things”3. By 2022, the market penetration is expected to reach over 80%4. Much of this growth will start in premium cars and then the technology will filter down into the value segment.
Cloud connectivity, antennas capable of sharing data with many nodes both inside and outside the vehicle, sensors that create a safer and more informed driving experience and rugged, high-speed, in-vehicle data networks are all vital to achieving the seamless, connected, feature-rich automotive future consumers are demanding. TE Connectivity’s (TE) deep understanding of rigorous automotive standards as well as our unparalleled expertise in sensors, data networks, interconnects, and antenna technology can help accelerate success for carmakers in this burgeoning market.
One thing to keep in mind is that while all automobiles share much of the same technology, connected cars and autonomous cars are different topics. Connectivity is turning the car into a smart device with the potential to become a crucial piece of the Internet of Things (IoT). Autonomy means cars gain the capacity to gather input for independent decision-making so that they can be self-reliant.
SENSORS: THE NERVOUS SYSTEM OF THE CONNECTED CAR
Since the late 1970s, electronically controlled sensors have been integral to automotive engineering, driven by emissions regulations from the United States Environmental Protection Agency (EPA) that required the use of catalytic converters.5 This regulation drove the demand for sensors and helped create performance, safety, and comfort advantages. Car owners now expect advanced driver assistance systems (ADAS), adaptive cruise control (ACC), lane departure warning (LDW), traffic sign recognition (TSR), blind spot monitoring (BSM), and intelligent high-beam assistants with light ranging (ILB). Increasingly, car owners also want telematics modules and other units, such as those used for toll collecting and real-time traffic reporting, or rain sensors that gather weather information.
By 2020, new model cars will have upwards of 200 sensors measuring data within the car and around its immediate environment6. It’s estimated that these cars will be generating 4 terabytes of data per car per day.7
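For a sense of scale, 4 terabytes per day works out to a substantial sustained data rate. The quick calculation below is illustrative only; real in-vehicle traffic would be bursty rather than a steady average.

```python
# What 4 terabytes per car per day works out to as a sustained average rate.
# Illustrative arithmetic only; real traffic is bursty, not constant.

TB_PER_DAY = 4
bytes_per_day = TB_PER_DAY * 1e12          # using decimal terabytes
seconds_per_day = 24 * 3600

avg_bytes_per_s = bytes_per_day / seconds_per_day
avg_megabits_per_s = avg_bytes_per_s * 8 / 1e6

print(f"~{avg_bytes_per_s / 1e6:.0f} MB/s, "
      f"or ~{avg_megabits_per_s:.0f} Mbit/s on average")   # ~46 MB/s, ~370 Mbit/s
```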
Sensing the World
For the connected car to reach its full potential, a key requirement will be the ability to capture correct and complete data about the surrounding environment. This starts with sophisticated sensor technology that determines a vehicle’s immediate environment. The technology that will prove critical in collecting this data includes high-resolution mono and stereo cameras, radar, and sensors capable of pinpointing objects up to 120 meters away—within one centimeter.
When transmitted by advanced Dedicated Short Range Communications (DSRC) antennas, sensor-generated data enables vehicle-to-everything (V2X) communication, which includes vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications.
When vehicle systems relay sensor-generated data, drivers can receive alerts about road conditions and driving hazards, such as congested roads, highway debris, or potholes—all well in advance of encountering these problems. When vehicle systems are connected to the roadway infrastructure, sensor generated data can supply accurate, real-time traffic data allowing mapping programs to plot the most efficient route, so drivers save time and greenhouse gas emissions are minimized.
“A lot of the sensors business is driven by what’s happening in electronics. So you see a lot of miniaturization, lower power. When you’re talking about a car, you’re talking about what’s going on inside of a cabin, what’s going on inside the engine. Or even what’s going on outside of a car.”
Byron Hill, VP, CTO, TE Sensor Solutions
Sensing Performance
Equally important will be internal sensing technology. TE offers a wide range of sensors for automotive applications including those that measure everything from position, speed, and humidity (in-cabin, engine air intake) to pressure (brake HPS, urea) and temperature. For example, humidity sensors are designed to improve performance, reduce energy consumption, and increase safety in environments where temperature affects performance. Our fluid property sensors bring real-time fluid monitoring to engines, fuel systems, selective catalytic reduction (SCR) systems, compressors, transmissions, gearboxes, and many other applications. Sensors within the engine, transmission, and braking systems will open the door to an era of predictive maintenance, where vehicles can schedule a visit to the mechanic before an issue arises.
Sensing Wellness
Biometric sensor technology will be another, increasingly important area for innovation. Soon, face, ocular, voice, or ECG technology will enable cars to recognize their driver. Rather than using a key or pressing a keyless start button, the driver will simply grab the steering wheel and embedded biometric sensors will start the car. Piezo sensors embedded in the car seat will monitor heart rate while dashboard cameras will track head movements to see if a driver is getting drowsy. Expertise in both medical device technology and consumer wellness applications—including health and fitness monitors—gives TE an advantage in developing and ruggedizing the miniature sensors that will make driver (and passenger) health monitoring an essential part of the connected driving experience.
ANTENNAS: THE HEART OF CONNECTIVITY
In all of the discussions about car connectivity, the antenna might be the component that’s taken for granted the most. But it’s a key component, along with sensors and in-vehicle data networks, of the connected car.
In the 1980s, cars typically had only one antenna: the AM/FM whip antenna. With the introduction of GPS and cellular service, the number of antennas in cars increased. By the late 1980s, some cars in the US also had another antenna for the 800-MHz cellular band. With the advent of CDMA, PCS, and GSM in the 1990s, car designers were challenged to find a way to mount all of the antennas needed to support the multiple bands required for cellular services. Today, there are in excess of 20 antennas in a typical car. There are antennas for 3G, 4G, and 4G LTE cellular service. There are Bluetooth antennas, satellite antennas for GPS, antennas in your tires for monitoring tire pressure, and antennas for on-board infotainment applications. Finding space to fit these antennas into a vehicle is an ongoing design challenge.
DSRC Antennas
Soon, cars will feature even more antennas. They may have six antennas just for cellular service. To create robust V2V and V2I communications, cars could require up to six DSRC antennas to provide adequate redundancy and increase coverage. As cars come to rely more on the cloud, redundant long-range antennas will gain even more importance to help ensure constant connectivity and fail-safe reliability.
The Emergence of 5G
The exponential increase in data from sensors and the need for almost-instant insight has made latency a central issue. One of the limitations of current 4G technologies is that radio waves go from the car to the cell tower down to a base station on the ground below it, slowing the speed of transmission. 5G resolves this, providing data rates 30 to 50 times higher than 4G—with a much lower latency.
Instead of having a separate radio transmitter and antenna mounted on a tower, 5G networks integrate both in a single unit called an Active Antenna System (AAS). Each AAS will include up to 128 separate antennas with high-speed coaxial connectors between them and the radio.
When combined with massive multiple-input, multiple-output (MIMO) protocols, these units will enable vehicles to share and receive more than one radio signal simultaneously over the same radio channel. This will mean that the AAS will be asked to handle a content explosion.
When available, 5G networks will operate at speeds 100 times faster than 4G. 5G will handle 100 to 1000 times more machine-to-machine connections than 4G. Latency of less than 1 millisecond is significantly better than 4G. This will make real-time data streaming possible—both to and from the cloud—with streaming infotainment systems, in-vehicle VR systems, and many other applications including vehicle software updates.
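One way to see why sub-millisecond latency matters for vehicles: at highway speed, latency determines how far the car travels while waiting for a network round trip. The sketch below is illustrative; the 50-ms comparison figure is an assumed typical 4G latency, not a number from this article.

```python
# How far a vehicle travels during one network exchange, using the
# sub-millisecond 5G latency figure quoted above. Illustrative only;
# the 50 ms value is an assumed typical 4G latency for comparison.

def distance_during_latency(speed_kmh, latency_ms):
    """Distance in meters covered while waiting `latency_ms` at `speed_kmh`."""
    speed_m_per_s = speed_kmh * 1000 / 3600
    return speed_m_per_s * latency_ms / 1000

print(f"{distance_during_latency(100, 1):.3f} m at 100 km/h with 1 ms latency")
print(f"{distance_during_latency(100, 50):.3f} m at 100 km/h with 50 ms latency (assumed 4G)")
```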
An incipient backup system of low-earth-orbit satellites with very low latency will also push 5G to encourage innovations around satellite antennas.
“Some of the techniques that we’re using today were only available to real high-end military applications 20 years ago. With 5G, what used to be a dumb passive antenna becomes a very sophisticated active antenna with high-speed connections inside it.”
Bruce Bishop, TE Fellow, Data & Devices
Challenges for Antennas
As antennas become more sophisticated, they must also provide long-term reliability in some of the harshest conditions. This means making antennas capable of withstanding temperatures as high as 185 degrees Fahrenheit (85 °C) without becoming brittle in subzero weather. At the same time, road vibration, engine noise, and speed cannot be allowed to affect performance.8
Also, as the number of antennas increases and car manufacturers seek to reduce the amount of cabling and the complexity of installation, automakers face a significant design challenge: the size, location, and shielding of each antenna. Bringing antennas inside the car has required miniaturizing them. It has also required a reduction in electromagnetic and radio-frequency interference, which creates an opportunity for innovative design elements. To shield sensitive components from the noise caused by the increase in wireless energy from multiple antennas, TE has developed products with board-level shielding (BLS).
Antenna Capabilities at TE
For several decades, TE has designed and manufactured some of the most innovative antenna solutions for consumer products. That experience enables us to develop and build antennas suitable for the demands of today's highly connected automobiles. We develop and manufacture antennas essential to On-Board Diagnostics (OBD). Our infotainment antenna and tuner systems enable near-perfect reception of broadcast services as well as mobile radio and data services. Our recent acquisition of Hirschmann Car Communications positions us to develop value-adding solutions for your design.
Hirschmann Car Communications deepens our expertise in vehicle-to-vehicle and vehicle-to-infrastructure communications, bringing long-standing know-how in high-frequency (HF) technology and in the development and manufacturing of transmitter and receiver systems, signal detection, and signal processing.
Whether they choose off-the-shelf solutions or customer-specific applications, our customers benefit from our cross-disciplinary expertise. With our continued commitment to research and development, state-of-the-art testing and measurement facilities, and our high degree of flexibility in production, we’ll continue to evolve as the needs of automakers evolve.
AUTOMOTIVE ETHERNET: THE BACKBONE OF CONNECTIVITY
With the increase of data production from sensors within the car and the advent of V2V, V2I, and now vehicle-to-cloud (V2C) communication, cars are becoming a major part of the IoT. The car is turning into the ultimate mobile device, and automotive Ethernet will play an essential role in its success.
For example, real-time communications with other vehicles and infrastructure will provide the car with the best possible database for predictive planning. The vehicle of the future will “know” much more about its immediate environment and the route ahead. ADAS will operate in support of the driver or perform an immediate action based on this increasingly detailed environmental model of the traffic situation ahead.
The magnitude of networking is summarized in a McKinsey study focusing on the connected car: “Today’s car has the computing power of 20 modern PCs, features about 100 million lines of code, and processes up to 25 gigabytes of data per hour. As the computing capacity of cars develops further, not only is programming becoming more complex and processing speeds becoming faster, but also the entire nature of the technology is shifting. While automotive digital technology once focused on optimizing the vehicle’s internal functions, the computing evolution is now developing the car’s ability to digitally connect with the outside world and enhance the in-car experience.”9
TE’s MATEnet modular and scalable connectors for automotive Ethernet
“We provide all the pieces to enable the connectivity stream. So whatever sensor you choose, whatever architecture you choose, whatever you want to keep on the car or bring it to the cloud, we have the connectivity components that help you to manage your system, your architecture, your stream of data.”
Dominique Freckmann, Manager, Silicon Valley Tech Office
Once vehicles become an integral part of the IoT, the amount of software in cars will continue to increase and its scope will expand. A key trend that is expected to affect growth is cyber security. A connected car needs protection against hacking and data theft. To protect and prevent these incidents, the vehicle’s software will require regular updates, through software-over-the-air (SOTA) distribution to install patches that eliminate weak spots. These updates will increase the amount of data traffic—both to the car and within the car.
Utilizing this wealth of data from all these sources requires high-speed, in-vehicle networking. The challenge is to not only provide more bandwidth for bigger data packages, but also meet the OEMs’ different approaches to vehicle electronic/electrical architecture. Electromagnetic Compatibility (EMC) specifications may require different types of interconnection technology as well. Automotive Ethernet therefore requires an intelligent interconnection solution that offers the flexibility, economy, and performance for differing EMC requirement levels.
THE MATEnet INTERCONNECTION SYSTEM
TE’s MATEnet interconnection is a proactive answer to today’s and tomorrow’s requirements for vehicle connectivity. MATEnet modular and scalable connectors are specifically developed for IEEE automotive Ethernet networking. This miniaturized, robust, automotive-grade technology has successfully passed severe testing and validation.
TE’s MATEnet relies on NanoMQS terminals: miniaturized, automotive-grade contacts that offer particularly high robustness against vibration. Standard unsealed NanoMQS connectors meet severity level 2 vibration requirements (around 3 g effective random and 30 g shock). Sealed connector versions can also meet vibration level 3 (“close to powertrain”) and level 4 (“engine mounting”).
MATEnet Cable Assembly (UTP/STP)
Available cable types: UTP 100 Mbps, UTP 1 Gbps, and STP 1 Gbps
Possible configurations:
1-port frame to 1-port frame
1-port frame to 1-port inline coupler
Female insert to female insert
Female insert to male insert
However, high-frequency, high-bandwidth data transfer places strict requirements on signal integrity. The effort and cost required to fulfill the needs of an individual automotive Ethernet application is typically split between measures integrated in the chip and the effort that goes into the channel (cable and connectors).
TE’s MATEnet was designed to offer an optimum balance between both cost curves. It delivers an excellent cost-performance ratio because it neither places a high burden on chip capabilities (size, power consumption) nor does it require high-end cabling (materials, processes, complexity).
MATE-AX RF Interconnects
ADAS units may have as many as 24 coax inputs coming from the many sensors required for detecting objects and free space around a vehicle. Coaxial cables provide an economical and easy-to-handle physical layer for RF signal transmission. But existing automotive-grade coax interconnection solutions such as FAKRA may not provide enough RF performance for ADAS.
The challenge for TE was to quadruple the number of coax cables that can be terminated as compared to a FAKRA connector, and to double the amount of lines in comparison to a High Speed Data (HSD) differential signaling connector.
MATE-AX is the result. MATE-AX provides an EMI-resistant, miniaturized RF interconnection solution for existing and future coax lines. With excellent signal integrity and long-term potential for up to 20 GHz, MATE-AX takes automotive coaxial technology to the next level of performance and facilitates transmission of large amounts of uncompressed data between signal sources and ECU “servers.” By offering a higher packaging density (4 coax cables in the space of 1 FAKRA connector; 2 coax cables in the space of 1 HSD connection), MATE-AX terminals support significant size and weight reductions and give the industry a solid, future-proof digital signaling roadmap.
MATEnet in the Future
—Power-over-Data-Line (PoDL) —
Currently, electronic devices in cars are supplied with electricity over separate cables. These additional cables make the harness heavier, more complex, and more difficult to feed through narrow passageways within the vehicle. Eliminating separate power lines would therefore help downsize and lighten the harness. The new concept of Power-over-Data-Line (PoDL) supports this strategy in vehicle applications: it uses the 100BASE-T1 and 1000BASE-T1 interfaces to supply power in parallel with the signal on a single unshielded twisted pair. TE is testing PoDL in combination with MATEnet and will validate this option within the MATEnet interconnection system. Testing so far has been very positive and indicates a beneficial use with up to 48 VDC.
—A2B Automotive Audio Bus® —
Our MATEnet is a potential interconnection technology for the Automotive Audio Bus® (A2B), developed by Analog Devices. This digital audio bus, with up to 50 Mbit/s (50 million bits per second) of bandwidth, was designed to carry high-fidelity audio in automobiles while significantly reducing the weight of existing cable harnesses. One potential use is multiple microphone arrays within vehicles that support applications such as voice recognition, active noise cancellation, and in-car communications.
— HDBaseT —
HDBaseT is an emerging technology providing up to 6 Gbps (6 billion bits per second) of full-duplex data transmission through a 15-meter link segment, with up to 4 inline connectors, at near-zero latency. It carries various protocols, including audio and video, Ethernet, power, and consumer interfaces such as USB and HDMI. TE has an active part in the HDBaseT Alliance, where automotive stakeholders collaborate in various technical committees to define appropriate automotive requirements and specifications. HDBaseT's native networking capabilities demonstrate the usability of unshielded twisted-pair cabling, comparable to 100BASE-T1 and 1000BASE-T1 applications.
Addressing multiple challenges, HDBaseT with its unprecedented bandwidth and suitability for existing UTP cabling systems like TE’s MATEnet is a key enabler for future in-vehicle connectivity.
DATA AND THE DECENTRALIZED CLOUD
Making full use of the capabilities of the 5G-enabled cloud will require faster, more robust in-vehicle data networks as well as a new generation of 5G-native antennas, receivers, and connectors. But in a future with billions of mobile devices and trillions of Internet of Things (IoT) devices moving massive amounts of data, it will also mean evolving the architecture of the cloud away from a centralized model to one where computing applications, data, and services are pushed to the periphery (the “edge”) of the network.10
With Edge Computing, speed is a key parameter. For instance, with Mobile Edge Computing (MEC), the current goals are: data rates up to 6 Gbps, less than 1 ms latency, mobility at 500 km/hour, and terminal localization within 1 meter. What this means in practical terms is more continuity of service with high reliability.11 In situations where computing and response need to be instantaneous (i.e., with real-time traffic reports), these decentralized nodes would handle the data loads.
Computations for things like autonomous driving would also be handled at this level, while the centralized cloud core would still handle applications that require robust computing power, data aggregation, and data storage.
TE’s long history of partnering with leading enterprise and hyperscale customers—combined with our experience in data and devices—offer OEMs practical intelligence that will help them evolve vehicle technology toward making possible new opportunities in networked intelligence.
THE TE ADVANTAGE: CROSS-DISCIPLINE INTELLIGENCE
No matter which technology path OEMs choose to innovate for the connected car, TE partners with these customers early in the design process to accelerate innovation. By applying our expertise in automotive standards and the knowledge we've gained from decades of working with customers, we can quickly meet the evolving challenges and requirements of designing for the automotive industry.
Today, there are two developments driving the automotive industry towards compact, robust high-speed communication solutions: ever-increasing data rates and the demand for miniaturization. A significant challenge around these developments is designing technology that maintains signal integrity.
Although automotive manufacturers are only now starting to talk about 10-Gbps data rates, we have manufactured 10-Gbps products since the 1990s and have shipped massive numbers of these high-speed components for almost 15 years. Combined with our understanding of automotive requirements, that experience lets us quickly adapt our solutions to meet the needs of automakers.
“The Data & Devices (D&D) business at TE is at the leading edge of speed and performance. For us, a 100, 400 gig connection is next generation high speed. What we’re finding is that there’s an increasing usage of D&D core products in connected cars, such as backplane connectors, internal cabling systems, or high-speed IO. This is creating synergies between the Data & Devices group and the Automotive group at TE Connectivity.”
Amitabh Passi, VP, Strategy and Business Development, TE Communications Solutions
Miniaturized, high-speed networks also pose a fundamental challenge in thermal management and EMI. Our innovations in board-level shielding (BLS) and heat-sink integration have improved efficiency and performance. Initially developed for enterprise and hyperscale data centers, these technologies are more relevant than ever to auto manufacturers. In the car, TE products connect almost every electrical function – from alternative power systems to infotainment and sensor technologies.
Board-level shielding (BLS) EMI shields from TE Connectivity
In addition, TE solutions help meet the evolving challenges and requirements of the auto industry:
Data connectivity: Technologies based on coax, shielded, optical, and wireless mediums.
Power and data distribution: High reliability transmission through connecting, switching, protecting, and sensing competencies.
Sensing: Data-driven technology to measure position, pressure, speed, temperature, humidity, and fluid quality.
Weight reduction through miniaturization: TE Nanos and MCON 0.50 interconnection systems enable a reduced size for electronic components, smaller wires, and a reduced total connector package.
Weight reduction through shift from copper to aluminum: When applied to a typical family-size car, the shift to aluminum conductors and TE’s LITEALUM crimp can save two to three kilograms of weight, and does so at lower material cost.
TE is also investing in the future of connected car technologies by devoting research and development (R&D) resources to the fundamentals of core connectivity, such as:
More power: High-voltage connections with anti-arcing and emergency shutoff features
Relays and circuit protection
New architectures to handle extra features, gadgets and power demands
Bigger and faster data pipes to flawlessly handle exponentially growing amounts of data inside and outside the vehicle
Optical data pipes (fiber optics)
Conductive data pipes (Ethernet)
Wireless data pipes (antennas for Wi-Fi, Bluetooth, 4G/LTE, dedicated short range communications)
More information from sensors to meet increasing demands for closed-loop control
We are also a leader in materials science. Since 1957, when we developed heat-shrink tubing for our Raychem product line, TE has invested in materials science research and innovation. Today, we reinvest five percent of our revenue in research and development. In partnership with leading research institutions and our customers, our engineers are constantly searching for ways to make solutions that are lighter, greener, tougher, or more conductive. Examples of advanced materials include high-performance carbon nanotubes that reduce weight in aerospace applications, conductive inks that are making manufacturing processes cleaner, and miniaturized components that are enabling a new generation of wearable technology. Through our Advanced Development Labs, we’re exploring technologies several generations out, as we work to develop essential technologies for an increasingly connected world.
This institutional intelligence informs all our product development for automotive standard solutions.
COMMITTED TO INNOVATION
TE is committed to providing connectivity and sensor solutions that enable OEMs to put more innovation into the connected car. We design and make our products smaller and lighter, with high reliability to perform as expected in the harshest environments.
© 2018 TE Connectivity Ltd. family of companies All Rights Reserved. TE Connectivity, TE Connectivity (logo), TE and Every Connection Counts, MATE-AX and MATEnet are trademarks. All other logos, products and/or company names referred to herein might be trademarks of their respective owners.
Gartner, “Gartner Says By 2020, a Quarter Billion Connected Vehicles Will Enable New In-Vehicle Services and Automated Driving Capabilities,” January 26, 2015: http://www.gartner.com/newsroom/id/2970017
https://www.statista.com/outlook/320/109/connected-car/united-states
https://www.yourmechanic.com/question/when-did-cars-first-start-using-sensors
McKinsey & Company: Connected car, automotive value chain unbound. September 2014, page 11, retrieved from: https://www.mckinsey.de/files/mck_connected_car_report.pdf
Garcia Lopez, Pedro; Montresor, Alberto; Epema, Dick; Datta, Anwitaman; Higashino, Teruo; Iamnitchi, Adriana; Barcellos, Marinho; Felber, Pascal; Riviere, Etienne (2015-09-30). “Edge-centric Computing: Vision and Challenges”. ACM SIGCOMM Computer Communication Review
The TU Delft MAVLab has been iterating on their DelFly flapping-wing robots for years, and this is the most impressive one yet:
Insects are among the most agile natural flyers. Hypotheses on their flight control cannot always be validated by experiments with animals or tethered robots. To this end, we developed a programmable and agile autonomous free-flying robot controlled through bio-inspired motion changes of its flapping wings. Despite being 55 times the size of a fruit fly, the robot can accurately mimic the rapid escape maneuvers of flies, including a correcting yaw rotation toward the escape heading. Because the robot’s yaw control was turned off, we showed that these yaw rotations result from passive, translation-induced aerodynamic coupling between the yaw torque and the roll and pitch torques produced throughout the maneuver. The robot enables new methods for studying animal flight, and its flight characteristics allow for real-world flight missions.
In a new paper, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) say that they’ve made a key development in this area of work: a system that lets robots inspect random objects, and visually understand them enough to accomplish specific tasks without ever having seen them before.
The system, called Dense Object Nets (DON), looks at objects as collections of points that serve as sort of visual roadmaps. This approach lets robots better understand and manipulate items, and, most importantly, allows them to pick up a specific object among a clutter of similar ones, a valuable skill for the kinds of machines that companies like Amazon and Walmart use in their warehouses.
The team trained the system to look at objects as a series of points that make up a larger coordinate system. It can then map different points together to visualize an object’s 3-D shape, similar to how panoramic photos are stitched together from multiple photos. After training, if a person specifies a point on an object, the robot can take a photo of that object, and identify and match points to be able to then pick up the object at that specified point. This is different from systems like UC-Berkeley’s DexNet, which can grasp many different items, but can’t satisfy a specific request. Imagine a child at 18 months old, who doesn’t understand which toy you want it to play with but can still grab lots of items, versus a four-year-old who can respond to “go grab your truck by the red end of it.”
In the future, the team hopes to improve the system to a place where it can perform specific tasks with a deeper understanding of the corresponding objects, like learning how to grasp an object and move it with the ultimate goal of say, cleaning a desk.
[ MIT ]
We demonstrated the high-speed, non-deformation catching of a marshmallow. The marshmallow is a very soft object which is difficult to grasp without deforming its surface. For the catching, we developed a 1ms sensor fusion system with the high-speed active vision sensor and the high-speed, high-precision proximity sensor.
Now they need a high speed robot hand that can stuff those marshmallows into my mouth. NOM!
We haven’t heard from I-Wei Huang, aka Crab Fu, in a bit. Or, more than a bit. He was one of the steampunk makers we featured in this article from 10 years ago! It’s good to see one of his unique steam-powered robot characters in action again.
Toot toot!
[ Crab Fu ]
You don’t need to watch the entire five minutes of this—just the bit where a 4-meter-tall teleoperated robot chainsaws a log in half.
Also the bit at the end is cute.
[ Mynavi ]
A teaser for what looks to be one of the longest robotic arms we’ve ever seen.
This is a little dizzying, but it’s what happens when you put a 360-degree camera on a racing drone.
[ Team BlackSheep ]
Uneven Bars Gymnastics Robot 3-2 certainly charges quickly, but it needs a little work on that dismount.
[ Hinamitetu ]
This video shows the latest results achieved by the Dynamic Interaction Control Lab at the Italian Institute of Technology on teleoperated walking and manipulation for humanoid robots. We have integrated the iCub walking algorithms with a new teleoperation system, thus allowing a human being to teleoperate the robot during locomotion and manipulation tasks.
Also, don’t forget this:
[ Paper ]
AEROARMS is one of the most incredible and ambitious projects in Europe. In this video, we explain what AEROARMS is. It is part of the European Commission’s H2020 science program. It was funded in 2015 and will be finished in September of this year (2018).
Yup, all drones should come with a pair of cute little arms.
[ Aeroarms ]
There’s an IROS workshop on “Humanoid Robot Falling,” so OF COURSE they had to make a promo video:
I still remember how awesome it was when CHIMP managed to get itself up again during the DRC.
[ Workshop ]
New from Sphero: Just as round, except now with a display!
It’s $150.
[ Sphero ]
This video shows a human-size bipedal robot, dubbed Mercury, which has passive ankles and thus relies solely on hip and knee actuation for balance. Unlike humans, Mercury’s passive ankles force it to maintain balance by continuously stepping. This capability is not only very difficult to accomplish but also enables the robot to rapidly respond to disturbances like those produced when walking around humans.
[ UT Austin ]
Mexico was the partner country for Hannover Fair 2018, and KUKA has a strong team in Mexico, so it was a perfect excuse to partner with QUARSO on a novel use of robots to act as windows on the virtual world in the Mexico Pavilion.
A little Bot & Dolly-ish, yeah?
[ Kuka ]
For the first time ever, take a video tour of the award-winning Robotics & Mechanisms Laboratory (RoMeLa), led by its director Dr. Dennis Hong. Meet a variety of robots that play soccer, climb walls and even perform the 8-Clap.
[ RoMeLa ]
This week’s CMU RI Seminar is from CMU’s Matthew O’Toole on “Imaging the World One Photon at a Time.”
The heart of a camera and one of the pillars for computer vision is the digital photodetector, a device that forms images by collecting billions of photons traveling through the physical world and into the lens of a camera. While the photodetectors used by cellphones or professional DSLR cameras are designed to aggregate as many photons as possible, I will talk about a different type of sensor, known as a SPAD, designed to detect and timestamp individual photon events. By developing computational algorithms and hardware systems around these sensors, we can perform new imaging feats, including the ability to (1) image the propagation of light through a scene at trillions of frames per second, (2) form dense 3D measurements from extremely low photon counts, and (3) reconstruct the shape and reflectance of objects hidden from view.
[ CMU RI ]
I was scrolling through emails on my phone one recent morning when a strange message appeared among the usual mix of advertisements and morning newsletters. It was a confirmation for an upcoming doctor’s appointment in New York City, but came from an address I’d never seen before. And at the top, there was a friendly note: “I guess this is for you :)”
The note, I would later learn, was written by a Norwegian named André Nordum, whose email address is just a few letters different from my own. André, a 33-year-old banker in Oslo, had received the confirmation by mistake, thanks to my messy handwriting on an intake form.
Realizing this, he’d googled my name to try to track down my personal email address and forward the message to me. When he couldn’t easily find my address, he correctly guessed it based on the similarity of our last names (my surname, Nordrum, is also Norwegian).
All day, I thought about André’s act of digital kindness and the heartwarming fact that a stranger had spent time and effort trying to send me a bit of important information. I also felt a twinge of guilt: I’d received emails in the past—from car dealerships and day cares—that were clearly meant for other people, and I’d never forwarded any of them along. What does that say about me as a person?
André, it turns out, had also ignored the first email he’d received from my doctor. But when the second one arrived, he started to worry that I might actually miss an appointment. And, he later told me by email, “I did not want to get emails about your dermatology history for the foreseeable future.” (Don’t worry, André, I’ve updated my records.)
The whole situation reminded me of another strange case of mistaken identity. For years, people who tried to email my brother, Eric Nordrum, inadvertently sent their messages to a fellow named Emanuel Nordrum, whose email address is just one letter off from Eric’s.
Emanuel, who is (surprise) also Norwegian, patiently replied to many of those stray emails, alerting the senders and steering the messages back to Eric, their rightful recipient.
“This has been going on for a decade,” Emanuel told me recently from his home in Oslo, sounding pleased that I’d called to discuss the matter.
One of the first such emails to inadvertently land in Emanuel’s inbox was sent by my father on 3 August 2009. In a series of emails that he mistakenly sent to Emanuel over the years, my father wrote about playing pickleball, cheering for Ohio State’s football team, and the goings-on at our family’s former home in Adams County, Ohio.
In one note, he reported that a “broken and bleeding callous” that afflicted our family dog had miraculously healed. These are the kinds of mundane family details that would seem strangely intimate to share with a stranger. And yet there was Emanuel, privy to it all, thanks to the digital proximity of his email address to Eric’s own.
“It’s a little bit like sitting on the bus or overhearing somebody in the restaurant or something,” Emanuel told me. “They’re having a conversation that they think is for the family, and you just happen to be placed so that you can hear everything.”
Emanuel, a 35-year-old filmmaker, wrote back to that original message to tell my father that he was emailing the wrong person:
Hi,
I'm afraid you've got the wrong email address for your son.
My name is Emanuel Nordrum and I reside in chilly Norway, which is nowhere near Adams County. As such, I'll have to take you at your word about the Buckeyes, but I do hope your team wins, despite or with the odds, and that you enjoy the rest of your weekend.
When my father made the same mistake two years later, Emanuel politely responded again, writing, “Hi Jim, I'm afraid you got me again, rather than your son Eric.”
Through his good-natured replies, Emanuel struck up a conversation with my father—about the Winter Olympics, and the possibility that our families may share some Norwegian heritage.
Emanuel also responded to stray emails to Eric from my other brother, Kyle. Once, Emanuel forwarded a plane ticket confirmation and wished Kyle a pleasant trip to Las Vegas.
Emanuel proved helpful in other ways, too. One day in 2016, he was reading a Reddit thread about a website that lets U.S. citizens search for unclaimed funds they’re owed in various states. When Emanuel typed in his own name to test it out, “Eric Nordrum” popped up in the results.
Seeing this, Emanuel wrote a note to Kyle, politely explaining that his brother Eric, for whom Emanuel had wrangled countless emails over the years, may have unclaimed funds in Wisconsin (alas, that money turned out to be for another Eric Nordrum).
After Emanuel had spent years serving as Eric’s digital concierge, the stray emails in his inbox gradually slowed to a trickle as my family updated their contact lists and learned to double-check Eric’s address before hitting send. “The messages dropped off, and I was a little bit sad, actually,” Emanuel says.
Still, a few continue to make it through—enough to make Emanuel consider somehow transferring the email account over to Eric. “Pretty much the only stuff I get there now is from your family,” he says. (Curiously, Eric has never received an email meant for Emanuel.)
In 2017, Eric traveled to Norway on vacation and offered to buy Emanuel a beer in Lillehammer, where he was living at the time, for all of his trouble over the years. They met up and talked for hours, causing Emanuel to miss multiple buses to his classes at the Norwegian Film School.
In the end, my interactions with André and Emanuel reminded me of how downright pleasant it can sometimes be to interact with strangers on the Internet.
But after speaking with both of them, I still have more questions: Do these email mix-ups happen to everyone? Is it more likely to happen between people with common or uncommon names? Are Norwegians the most helpful and conscientious people on the planet?
If you’ve been on either side of one of these odd email interactions, let us know in the comments how you handled it.
When it comes to machine automation, hardware costs and complexity add up fast. As your requirements expand, so does your ever-growing list of hardware: a controller for robot control, another for CNC, another for machine vision. You end up with a lot of controllers, some of them proprietary, which ultimately means a lot of systems to manage – and a lot of dollars out of your pocket.
Software-based machine control changes that paradigm. With the right software and a single real-time Windows PC, you can consolidate all of those controllers and their associated costs. Your Windows IPC becomes the only controller that you need. Simply by flipping a switch or moving an Ethernet cable, you can seamlessly switch from a robot controller to a CNC controller to a GigE camera. No more separate infrastructure with separate costs, no need for data acquisition or control cards – just one integrated real-time Windows machine acting as an all-in-one controller.
What about the challenges of EtherCAT? While EtherCAT is recognized as the network standard for software motion control, it’s not without issues. That’s why KINGSTAR delivers auto-discovery, auto-configuration, and much more, all in a “plug-and-play,” open and standards-based environment.
Software-based machine automation also supports the modern needs of Industry 4.0 and the Industrial IoT (IIoT). It enables an OPC UA connection to the cloud for analytics, back-end needs, and security. A SCADA connection saves data to your database for real-time processing. And with new add-ons consistently added to the platform, you can keep up with the industry’s move to the cloud.
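As a rough illustration of what an OPC UA link for analytics can look like in practice, here is a minimal read using the open-source python-opcua client; the endpoint URL and node ID are placeholders, and KINGSTAR’s own platform exposes this connectivity through its own tooling.

```python
from opcua import Client  # open-source python-opcua package (pip install opcua)

# Placeholder endpoint and node ID -- replace with your controller's actual values.
ENDPOINT = "opc.tcp://192.168.0.10:4840"
SPINDLE_SPEED_NODE = "ns=2;s=Machine1.Spindle.Speed"

client = Client(ENDPOINT)
try:
    client.connect()
    node = client.get_node(SPINDLE_SPEED_NODE)
    speed = node.get_value()            # read a single process value from the controller
    print(f"Spindle speed: {speed} rpm")
    # From here the value could be forwarded to a cloud analytics endpoint
    # or written to a SCADA historian for real-time processing.
finally:
    client.disconnect()
```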
Watch the demo video below to learn how KINGSTAR helps you radically simplify your architecture by consolidating controllers and modernizing machine automation.
Data centers need to be ready to support the computing and performance demands required by 5G and IoT. Learn how to accelerate from 100GE to 400GE in the data center using advanced signal modulation and coding techniques such as PAM4 and FEC.
First, it correctly predicted the top four finishers at the Kentucky Derby. Then, it was better at picking Academy Award winners than professional movie critics—three years in a row. The cherry on top was when it prophesied that the Chicago Cubs would end a 108-year dry spell by winning the 2016 World Series—four months before the Cubs were even in the playoffs. (They did.)
Now, this AI-powered predictive technology is turning its attention to an area where it could do some real good—diagnosing medical conditions.
In a study presented on Monday at the SIIM Conference on Machine Intelligence in Medical Imaging in San Francisco, Stanford University doctors showed that eight radiologists interacting through Unanimous AI’s “swarm intelligence” technology were better at diagnosing pneumonia from chest X-rays than individual doctors or a machine-learning program alone.
“It went really well,” says Matthew Lungren, a pediatric radiologist at Stanford University Medical School, coauthor on the paper and one of the eight participants. “Before, we had to show [an X-ray] to multiple people separately and then figure out statistical ways to bring their answers to one consensus. This is a much more efficient and, frankly, more evidence-based way to do that.”
It was a small study, but the findings suggest that instead of replacing doctors, AI algorithms might work best alongside them in health care.
“We shouldn’t throw away human knowledge, wisdom, and experience,” says Louis Rosenberg, CEO and founder of Unanimous AI. “Instead, let’s look at how we can use AI to leverage those things.”
The company’s technology—a combination of AI algorithms and real-time human input—has also made headlines by correctly predicting Trump’s approval ratings, TIME’s Person of the Year, and the exact final score of Super Bowl 51, among others.
The current study is the company’s first foray into medicine.
Pneumonia is a particularly tricky disease to diagnose on X-rays alone because it looks like a lot of other illnesses. In the current study, eight radiologists in different locations sat in front of their computers and analyzed 50 chest X-rays from an open source data set. Each doctor was asked to predict how likely it was that the patient had pneumonia, based on the X-ray.
But this was not crowdsourcing—each doctor did not simply respond with a “yes” or a “no.” Instead, using the Swarm AI system—modeled on the collective decision-making process of honeybee swarms—each doctor controlled a small magnet icon that enabled them to push the group consensus toward their opinion. Every X-ray was examined in real-time with the other doctors simultaneously contributing opinions.
As the doctors weighed in, AI algorithms monitored the behavior of each participant, inferring how strongly each felt about their choice based on the relative motions of their icon over time. Someone who holds out longer on one choice, for example, may be expressing a stronger sentiment than someone who switches opinion quickly or several times.
“To really find the optimal solution, it’s not enough to just know what their opinions are, one really needs to know their varying levels of confidence,” says Rosenberg.
The algorithms then combined those preferences into a specific choice. Each deliberation took between 15 and 60 seconds, and the doctors diagnosed all 50 X-rays in about 90 minutes, says Rosenberg.
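Unanimous AI’s algorithms are proprietary, so the sketch below is only a minimal illustration of the idea described above: weight each radiologist’s estimate by a confidence inferred from how persistently they pulled toward it. The persistence values and the weighting scheme are assumptions for illustration, not the company’s method.

```python
def swarm_estimate(pulls):
    """Combine pneumonia probabilities, each weighted by an inferred confidence.

    `pulls` maps each doctor to (probability_estimate, persistence), where
    persistence is the fraction of the deliberation spent pulling toward that
    estimate. Holding a position longer is treated as higher confidence, per the
    behavioral idea described above; the exact weighting is an assumption.
    """
    weighted = sum(p * persistence for p, persistence in pulls.values())
    total_weight = sum(persistence for _, persistence in pulls.values())
    return weighted / total_weight

readings = {
    "doctor_1": (0.80, 0.9),   # confident the X-ray shows pneumonia
    "doctor_2": (0.30, 0.4),   # leaned "no" but switched positions often
    "doctor_3": (0.65, 0.7),
}
print(f"Swarm estimate: {swarm_estimate(readings):.2f}")  # ~0.65 for these toy numbers
```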
In the end, the Swarm AI system was 33 percent more accurate at correctly classifying patients than individual practitioners, and 22 percent more accurate than a Stanford machine-learning program called CheXNet. Last year, CheXNet beat radiologists at diagnosing pneumonia from X-rays.
The Swarm AI technology is unlikely to be used by radiologists for the hundreds of chest X-rays that cross their desks daily, says Lungren, but it could be especially useful in two key situations. First, it would be “insanely invaluable” in situations where international experts are asked to weigh in on difficult cases, he says.
Second, the technology enables doctors to each have an equal chance to influence a diagnosis. When a group of doctors meets to discuss a difficult case, which is common in large hospitals, some of the smartest people in the room may be introverts and their voices might not be heard, says Lungren. Swarm AI takes politics and personalities out of the process.
“The best way to get multiple humans to agree on something, so far, for us, has been the swarm,” says Lungren.
The team now plans to conduct a larger study using actual patient cases at the Stanford University Medical Center.
This content is brought to you by National Instruments.
The future of wireless communication is 5G. 5G wireless technology promises a rich, reliable, and hyperconnected world. But from new bands to wider bandwidth and new beamforming technology, 5G New Radio (NR) presents significant design and test challenges. Find a number of 5G related resources made available by National Instruments below.
Over the last several years, researchers have been hard at work exploring new concepts and technologies to answer the question "What is 5G?".
The Latest on 3GPP and ITU Standards
The 5G New Radio (NR) standard is here, and it's being tested and trialed right now.
InterOperability Device Testing (IODT) determines whether the base station and device can establish and maintain a robust communication link that delivers 5G performance under prescribed test conditions.
In-Depth Webcast With a Focus on SIGINT and Electronic Warfare Available Now
Unmanned Aerial Vehicles (UAVs), commonly known as "drones," have been gaining popularity in recent years.
Networks must deal with tight latency constraints while keeping energy consumption in check
Emerging 5G Applications
In this white paper, learn about D2D and how it enables fifth generation (5G) wireless network communication, from short-range wireless to vehicle-to-vehicle.
Connected car designs are also experiencing a constant proliferation in the number of vehicular radar systems while dealing with increasingly complex and independent subsystems.
As more devices are connected wirelessly, the need for wireless technologies that can handle increased data and capacity demands has grown exponentially.
Device-to-device (D2D) communication refers to the technology that allows user equipment (UE) devices to communicate with each other with or without the involvement of network infrastructures such as an access point or base stations.
As the first phase of 5G NR wraps up and the 3GPP finishes defining the communications protocol, the standards body also has identified specific frequency bands intended for 5G.
This evaluation includes LabVIEW Communication System Design Suite, which includes next-generation LabVIEW packaged with relevant add-ons specifically for rapidly prototyping communications systems.
3rd Generation Partnership Project (3GPP) members meet regularly to collaborate and create cellular communications standards. Currently, 3GPP is defining standards for 5G.
Build 5G wireless networks and systems with software defined radio
New automobiles are integrating electronic elements for safety, connectivity and clean energy consumption. With the increase in the number of electronic components, new vehicles need to be optimized for power efficiency, sensor fusion, communications and more. Accelerate development in e-mobility, autonomous driving and the connected car.
At an event today, Apple executives said that the new iPhone Xs and Xs Max will contain the first smartphone processor to be made using 7 nm manufacturing technology, the most advanced process node. Huawei made the same claim, to less fanfare, late last month and it’s unclear who really deserves the accolades. If anybody does, it’s TSMC, which manufactures both chips.
TSMC went into volume production with 7-nm tech in April, and rival Samsung is moving toward commercial 7-nm production later this year or in early 2019. GlobalFoundries recently abandoned its attempts to develop a 7 nm process, reasoning that the multibillion-dollar investment would never pay for itself. And Intel announced delays in its move to its next manufacturing technology, which it calls a 10-nm node but which may be equivalent to others’ 7-nm technology.
Apple’s new A12 Bionic is made up of six CPU cores, four GPU cores, and an 8-core “neural engine” to handle machine learning tasks. According to Apple, the neural engine can perform 5 trillion operations per second—an eight-fold boost—and consumes one-tenth the energy of its previous incarnation. Of the CPU cores, two are designed for performance and are 15 percent faster than their predecessors. The other four are built for efficiency, with a 50 percent improvement on that metric. The system can decide which combination of the three types of cores will run a task most efficiently.
Calling the A12 Bionic “an impressive feat,” VLSI Research analyst G. Dan Hutcheson says the chip “demonstrates that the attractiveness of staying on Moore’s Law has not diminished.”
Huawei’s chip, the Kirin 980, was unveiled at IFA 2018 in Berlin on 31 August. It packs 6.9 billion transistors onto a one-square-centimeter chip. The company says it’s the first chip to use processors based on Arm’s Cortex-A76, which is 75 percent more powerful and 58 percent more efficient than its predecessors, the A73 and A75. It has eight cores: two big, high-performance ones based on the A76; two middle-performance ones that are also A76s; and four smaller, high-efficiency cores based on a Cortex-A55 design. The system runs on a variation of Arm’s big.LITTLE architecture, in which immediate, intensive workloads are handled by the big processors while sustained background tasks are the job of the little ones.
Kirin 980’s GPU component is called the Mali-G76, and it offers a 46 percent performance boost and a 178 percent efficiency improvement from the previous generation. The chip also has a dual-core neural processing unit that more than doubles the number of images it can recognize to 4,500 images per minute.
The Kirin 980 debuts in Huawei’s Mate 20 on 16 October. The first new generation iPhones start to ship on 21 September.
This post was corrected to show the right number of CPU cores and updated to include analyst comment and shipping dates.
Apple’s original Watch shipped with two optical heart rate sensors, mainly for tracking the wearer's heart rate during exercise. Later models added resting heart rate monitoring and alerts, but the device was still basically intended for use with fitness applications. A year ago, the company dipped a wrist into the medical pool, launching a study with Stanford to use the sensors to collect data on irregular heart rhythms and notify users who might be having episodes of atrial fibrillation, suggesting they contact their doctors for follow-up tests—like an electrocardiogram (ECG).
Today, Apple announced that it is building the ability to generate ECGs into its watch. The new Apple Watch Series 4 includes electrodes on the back of the watch (where it touches the wrist) and on the watch stem. To take an ECG, the person wearing the watch launches the app, then touches the watch stem with a finger for 30 seconds. (Putting one of the electrodes on the stem addresses one challenge wearable electrocardiographs have wrestled with—electrodes that are too close together don’t get a good signal.)
The watch then displays its interpretation of the ECG—either a normal rhythm or atrial fibrillation; the full ECG (those tracings we’re familiar with) can be stored as a PDF on the user’s iPhone and sent to a doctor for further interpretation. The device, reported Apple’s Chief Operating Officer Jeff Williams, is certified by the U.S. Food and Drug Administration (FDA); he said that it’s the first certified ECG monitor to be sold over the counter, directly to consumers.
Williams brought American Heart Association president Ivor Benjamin to the stage for an endorsement. Benjamin said, “Capturing heart rate data in real time is changing the way we practice medicine. People often report symptoms that are absent during medical visits. The ability to access health data on demand is game changing.”
Apple’s Series 4 Watches with built-in ECG capability will sell from US $399 to $499, depending on other features.
In-hand manipulation is one of the things near the top of a very, very, very long list of things that humans do without thinking that are extraordinarily difficult for robots. It’s the act of repositioning an object with one hand, usually with your fingers—you do it whenever you pick up a pen, for example, to switch from a “picking up” grasp to a “writing something” grasp. Next time you do this, pay attention to the intricate, coordinated motion that happens, and ask yourself just how in the world you could honestly expect a robot to do something similar.
And yet, robots are learning to do such things. For example, OpenAI recently taught a five-fingered hand to manipulate a cube, which is great, if you have a lot of patience and/or computing resources, and the budget for a fancy hand and stuff. For those of us without wealthy (and occasionally eccentric) patrons, a conventional gripper is a more realistic option, and researchers from Yale University’s GRAB Lab have developed a two-finger design with a clever variable-friction system that can do in-hand manipulation at a fraction of the cost.
“Our approach is quite different and uses a very simple hand setup and a very simple controller, but with finger surfaces inspired by the unique biomechanical properties of the human finger pads, which change their effective friction as normal force is increased,” says Ad Spiers, a former GRAB Lab researcher and now a research scientist at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany. “This enables people to choose whether their fingers grip or slide over objects.”
The nice thing about your fingers is that you (probably) have bones in them, and they are (probably) surrounded by squishy tissue bits, and (hopefully) covered in textured skin. This is a phenomenal combination, since the bone lets you exert gripping force when you want to, while the skin and fat can maintain softer contact to let objects slide around. By using hard, friction-y contact in concert with softer sliding-y contact, you’re able to manipulate objects in all sorts of ways. Yale’s variable-friction fingers are an attempt to replicate this sort of functionality, through hardware that can turn friction on and off to alternate between gripping and sliding.
It’s a fairly simple system, compensating for lack of complexity with clever design. The fingers are mostly 3D printed, with a molded elastomer friction-enhanced insert. The low friction bits are just ABS. The fingers can be either friction-y, slide-y, passively switchable by using downward force on an elastically suspended element, or actively switchable with the help of a servo. The cheapest and simplest way is to go with the passively switchable version, which still allows the gripper to adopt different manipulation modes, as the GRAB Lab group, led by Professor Aaron Dollar, describes in a recent paper: “Manipulating an object between the fingers with a low-torque reference enables object sliding [translation], while manipulating with a high-torque reference enables pivoting.”
And if you have the budget for it, adding in those servos lets you do even fancier things, like rotating an object without translating it. In general, the gripper is able to achieve arbitrary object poses through in-hand manipulation, resulting in “a within-hand manipulation workspace beyond the majority of the gripper designs in literature.” It seems like in order for this to be true you’d need a highly complex software controller or something, but it’s really not all that fancy, the researchers say:
An unsophisticated controller alternates between torque and position control for each finger, in order to maintain object grasp while also modulating grip force. This in turn varies finger surface friction to allow sliding, gripping and rolling of objects. The later addition of active fingers permits controlled sliding of [an] object on both fingers. This extends the capability of the hand and enables in-place rotation or proximal/distal transfer, for the fine positioning of objects within the gripper workspace.
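Going only by that description, a toy version of such a controller might look like the sketch below, alternating torque and position references per finger to switch between sliding, pivoting, and repositioning. The torque values, mode names, and actuator stub are hypothetical, not the GRAB Lab’s code.

```python
import time

class FingerStub:
    """Placeholder actuator; a real gripper would wrap its servo API here."""
    def __init__(self, name):
        self.name = name
        self.target_angle = 0.0
    def set_torque(self, nm):
        print(f"{self.name}: torque reference {nm:.2f} N*m")
    def set_position(self, angle):
        print(f"{self.name}: position reference {angle:.2f} rad")

LOW_TORQUE = 0.05   # light grip: friction surfaces disengage, object can slide (translate)
HIGH_TORQUE = 0.30  # firm grip: friction surfaces engage, object pivots with the fingers

def manipulation_step(left, right, mode):
    """One control tick of the alternating torque/position scheme described above."""
    if mode == "slide":
        left.set_torque(LOW_TORQUE)
        right.set_torque(LOW_TORQUE)
    elif mode == "pivot":
        left.set_torque(HIGH_TORQUE)
        right.set_torque(HIGH_TORQUE)
    else:  # "reposition": move the fingertips to new contact points under position control
        left.set_position(left.target_angle)
        right.set_position(right.target_angle)

left, right = FingerStub("left finger"), FingerStub("right finger")
for mode in ["slide", "pivot", "reposition"]:
    manipulation_step(left, right, mode)
    time.sleep(0.01)  # an assumed ~100 Hz control rate
```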
The controller is open loop right now, but they’re working on a closed loop version that will allow the fingers to move objects into target poses, which will make the gripper much more useful. The researchers also suggest that the system they’ve come up with could relatively easily be integrated into other grippers to extend their capabilities.
Hopefully, this hardware will be miniaturized and combined with a variety of manipulation controllers to take advantage of other grasping techniques, like using a nearby surface for help (as shown in the video). Pure in-hand manipulation is cool, of course, but in a very practical sense, what’s important here is the objective: making it possible for robots to affordably and reliably do useful things in the real world, especially when it comes to things like prosthetics. And with that in mind, the researchers are in the process of open sourcing the design files to allow other folks to experiment with the hardware or develop more advanced controllers.
[ GRAB Lab ]
Do you like robots? Of course you do! You’re reading IEEE Spectrum, so you almost certainly love robots. Robots capture our imagination. Robots are the future. Now let us ask you this: What’s your favorite robot?
We bet some of you said R2-D2. Or maybe Rosie. Or Robby. Or Johnny 5. Or Data.
These are all cool robots. We like them too! But here’s the problem: Those are not real robots. We see this happen often, especially with kids. When you ask them which robots they find inspiring, the answers typically come from science fiction.
And why is that a problem? Because we’ve reached the point where it’s clear that robots are going to affect every aspect of our lives. It won’t happen overnight, but most of us will likely see the day when robotics will be everywhere—in our homes, offices, schools, factories, hospitals, streets, and even skies.
And that’s why it’s important to focus on real-world robots, not just sci-fi ones. We want more engineers to pursue careers in robotics. We want more kids to dream of becoming roboticists and technologists, or at least be sufficiently familiar with the details of the technology to make informed, thoughtful, and ethical decisions in the future. So we need to make real robots just as inspiring as their fictional counterparts, and here at Spectrum, we have a plan to do just that.
Over the past year, we’ve been creating a massive portal for everything robotics, built around a fun and unique dynamic catalog. You can see it right now at Robots.ieee.org. There you’ll find a vast zoo of humanoids, drones, exoskeletons, quadrupeds, and other kinds of automatons, each with its own profile, with photos, videos, curious facts, and technical specifications. (We’re currently in beta, preparing for a full launch next month.)
Many profiles also have special interactives that you won’t find anywhere else: You can spin robots 360 degrees or make them move. Take joy in making the robot baby iCub crawl across the screen, in wiggling the fingers of NASA’s space humanoid Robonaut, or in swapping the facial expressions of the lifelike android Geminoid DK.
We also want to know how you feel about all these different robots. You can rate them based on their capabilities and appearance, and then you can see how each robot ranks against the others. Based on users’ votes, we’ve created rankings to see which robots are the Top Rated, Creepiest, and Most Wanted.
The site also has a robotics news section, a game called Faceoff, and an educational section that will feature lesson plans and other STEM (science, technology, engineering, math) materials for schools interested in learning about real robots from industry, research, and startups.
If our robot guide happens to look familiar, that’s because it’s an expansion of Spectrum’s popular Robots for iPad app. If you know the app, thank you for being a user—an update is coming soon for iOS and Android. In the meantime, we invite you to check out Robots.ieee.org on your desktop, tablet, or phone for the latest content.
Our collection currently has 157 robots, and we’re going to be adding more soon. Our goal is to have robots of all types and sizes and from as many countries as possible.
So we ask again: What is your favorite robot? Go to the site and check out the robots we already have. Send us your suggestions of new robots to add by emailing hellorobots@ieee.org. Get them in by 15 October, and you may win an exclusive robot T-shirt. Go robots!
This is sponsored content and is brought to you by University of Maryland.
Place talented engineers from different disciplines in close contact with one another, and they'll create the powerful insights and innovations needed to take on today's most complex engineering challenges. Explore how the A. James Clark School of Engineering’s research efforts are taking on some of society’s most pressing needs.
University of Maryland engineers revive an old chemistry with a new electrolyte.
Clark School researchers take a new computational approach to understanding the brain's function.
Maryland researchers advance networked control and virtual reality for added safety.
When it comes to things that are ultrafast and lightweight, robots can't hold a candle to the fastest-jumping insects and other small-but-powerful creatures.
Maryland Researchers Develop Robots With the Same Capabilities as Fish
University of Maryland researchers demonstrate the first single-photon transistor using a semiconductor chip
Maryland vertical take-off and landing designs receive top honors in the Vertical Flight Society's annual student design competition
Imagine an autonomous coaxial-proprotor swing-wing tailsitter that, like the high-altitude hummingbirds of Ecuador, leverages visual sensory information and adjustable wing geometry to maneuver in megacity environments. Let’s call it Metaltail.
Now picture a coaxial proprotor tailsitter configuration that utilizes a novel variable incidence boxwing and a bidirectional ducted fan, all in a vehicle weighing 532.6 kilograms that can hover and fly up to 426 kilometers per hour. Call it Kwatee.
Both of these designs received top honors in the Vertical Flight Society’s annual Student Design Competition, which challenges students to design a vertical lift aircraft that meets specified requirements.
In the graduate category, the University of Maryland (UMD) and Nanjing University of Aeronautics and Astronautics placed first for designing an autonomous coaxial-proprotor swing-wing tailsitter that used visual sensory information and adjustable wing geometry to maneuver in megacity environments. Lightweight turboshaft engines and an aerodynamic design were also included. The team named their design the “Metaltail” for the Tyrian Metaltail hummingbirds, which have the agility to hover precisely in place in complex and dynamic environments. They also developed a FLIGHTLAB model of their vehicle, winning the optional bonus portion of the competition.
In the undergraduate category, UMD’s team won first place for designing a coaxial proprotor tailsitter with blades that optimize the compromise between hover and propulsive efficiencies through an extensive parametric sweep of 7,700 airfoils, taper and twist rates. The team called this the “Kwatee,” which boasted two flight modes with the capability for navigating in megacity environments, a maximum dash speed of 426 kilometers per hour, an extended range of 354 kilometers, and an endurance of 4.1 hours.
UMD has a strong record of excellence at this competition. Last year, UMD also placed first in both the graduate and undergraduate categories, and, in 2016 and 2015, UMD won the top award in the graduate category.
This year, the US Army Research Laboratory (ARL) sponsored the competition with a total of $13,000 in prize money. The teams will be awarded a cash stipend and will be invited to the Vertical Flight Society's Annual Forum and Technology Display in May 2019 to present the details of their designs. Team members receive complimentary registration to the Forum, a technical event that promotes vertical flight technology advancement.
Last week, I sat down with Intel’s Gadi Singer, vice president and general manager of artificial intelligence architecture, and Chris Rice, head of the company’s AI talent acquisition, to talk about AI workforce issues. Here’s what they had to say.
On reports about huge and growing shortages of AI engineers forcing some companies to pay million-dollar salaries:
“What you see in the articles is relatively the truth,” said Rice. “One of the interesting things in AI is that it’s no longer just the technology companies that play in this space, you’ve got the finance industry, medical, retail, mobility, manufacturing—they are all starting to recruit AI engineers, whether they are developing a technology or applying a technology. Because of that, there is an increased global demand, and that is driving up the value of those engineers.”
But, interjected Singer, remember that AI is not one skill, one job description. “It is a diverse set of skills. You’ve got hardware architect, you’ve got designers, software developers, data scientists, and researchers.”
Given that the hottest area in AI is deep learning, encompassing all the neural network–related techniques, Singer continued, “people who have expertise in knowing how to develop those new techniques, these topologies, or how to implement them in the most efficient manner in software and hardware obviously have high value.”
The other thing driving value, Singer said, “is that the frontier in this space is moving faster than any technology that I’ve seen. The state of the art in deep learning in 2016 is called ‘legacy’ by 2018. So, people who have the ability to continuously learn and be on or ahead of this fast-moving frontier of deep learning are obviously very valuable.”
On efforts to fill the pipeline by educating more AI engineers:
“There are a couple of issues with this,” said Rice. “Academic institutions globally have started doing pretty well in putting emphasis on this realm of skill sets. But a lot of the research is actually being conducted in industry because of that fast innovation cycle, so industry is actually hiring a lot of professors out of academia. That’s a confounding issue: industry is moving to pull more people out of academia at a faster rate than it can produce them.”
On the brighter side, Singer says he sees an increase in the number of students and in the number of classes being offered.
And helping it all, he pointed out, is the cool factor. “Consider data science,” Singer said. “Data science used to be considered something dull, some area of statistics. But today, data science is really cool. And that attracts talent into academia, into industry.”
“So,” he indicated, “even though the need is strong, it translated into a pull, both in academia and in the larger population, that will eventually increase supply.”
On retaining a company’s AI engineers:
“We are seeing, in data science, that people are changing jobs every 21 months,” said Rice. “There is a higher turnover not because people want to bounce around between jobs, but because the problems they are working on are so diverse. They go from one place to another place to work on new and interesting problems.” The result, he says, is that companies struggle to find ways to keep them. “You have to be much more aggressive in the way you structure tenure for a person when they come into an organization,” Rice says.
This sounds simple, but it is powerful, said Singer: “Make it a fun and growing experience. So rather than talking in terms of packages and so on, the most significant factor for many of the top talents is: Do they feel that they are doing something that is at the leading edge of technology? Do they feel that they learn, so that, year over year, they grow? Do they feel that the work they are doing matters?”
Rice adds that today’s top engineers want to do more than just sit in a cubicle punching out lines of code. Rather, “they want to work on something that is going to help the bigger society.”
On the importance of a diverse workforce in preventing algorithm bias:
“The way you train machine learning has an impact on the way it sees the world,” said Singer. “With example-based, supervised learning, the set of examples being used impacts how it is analyzed. The best way to deal with [preventing bias] is having diverse teams. When the team has diversity, and looks at the problem from multiple angles, the solution it creates is going to have a well-rounded view.”
For the longer term, he envisions the solution being to train sophisticated AI systems of the future what bias is, how to spot it, and how to avoid it. “That is not something that can be done with today’s technologies, but mid- to long-term, I see this capability evolving,” Singer said.
On why kids today should consider aiming for a career in AI:
“AI has the advantage of being both highly impactful and multidisciplinary,” said Singer. That means it can support a variety of interests: “Whether you want to go more on the human side of the interaction, or are more statistics oriented, or go more to programming or engineering; each of those [corresponds to] a branch of AI. For example, for someone who really wants to work on health-related areas—attending to elderly, for example—those have elements in AI. And because it is so diverse, it allows you to connect to it with whatever is the special thing that makes you, you.”
The bottom line according to Rice: In the future it is going to be hard to have a career that doesn’t involve AI. “I have a very young child,” he said, “so I am of the assumption at this point that any career she has is going to have artificial intelligence implications.”
Insects are quite good at not running into things, and just as good at running into things and surviving, but targeted, accurate precision flight is much more difficult for them. As cute as insects like bees are, there just isn’t enough space in their fuzzy little noggins for fancy sensing and computing systems. Despite their small size, though, bees are able to perform precise flight maneuvers, and it’s a good thing, too, since often their homes are on the other side of holes not much bigger than they are.
Bees make this work through a sort of minimalist brute-force approach to the problem: They fly up to a small hole or gap, hover, wander back and forth a little bit to collect visual information about where the edges of the gap are, and then steer themselves through. It’s not fast, and it’s not particularly elegant, but it’s reliable and doesn’t take much to execute.
Reliable and not taking much to execute is one way to summarize the focus of the next generation of practical robotics—in other words, robotic platforms that offer affordable real-world autonomy. The University of Maryland’s Perception and Robotics Group has been working on a system that allows a drone to fly through very small and completely unknown gaps using just a single camera and onboard processing. And it’s based on a bee-inspired strategy that yields a success rate of 85 percent.
We’ve posted before about autonomous drones flying through small gaps, but the big difference here is that in this case, the drone has no information about the location or size of the gap in advance. It doesn’t need to build up any kind of 3D map of its environment or model of the gap, which is good because that would be annoying to do with a monocular camera. Instead, UMD’s strategy is to “recover a minimal amount of information that is sufficient to complete the task under consideration.”
To detect where the gap is, the drone uses an optical-flow technique. It takes a picture, moves a little bit, and then takes another picture. It identifies similar features in each picture, and thanks to parallax, the farther-away features behind the gap will appear to have moved less than the closer features around the gap. The edges of the gap are the places where you’ve got the biggest difference between the amount that features appear to have moved. And now that you know where all those things are, you can just zip right through!
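For a sense of how that parallax trick can be implemented, here is a rough sketch using OpenCV’s dense optical flow; the UMD paper has its own formulation, so the Farneback parameters and percentile threshold below are just illustrative assumptions.

```python
import cv2
import numpy as np

def gap_edge_mask(frame_before, frame_after, edge_percentile=90):
    """Highlight likely gap edges from two frames taken before and after a small sideways move.

    Near features (the foreground wall) shift more than far features (behind the gap),
    so sharp spatial changes in flow magnitude mark the gap boundary.
    """
    prev_gray = cv2.cvtColor(frame_before, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(frame_after, cv2.COLOR_BGR2GRAY)

    # Dense optical flow (Farneback); these parameter values are common defaults, not UMD's.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)

    # Gap edges are where flow magnitude changes sharply (near wall vs. far background).
    grad_x = cv2.Sobel(magnitude, cv2.CV_64F, 1, 0)
    grad_y = cv2.Sobel(magnitude, cv2.CV_64F, 0, 1)
    edge_strength = np.hypot(grad_x, grad_y)

    threshold = np.percentile(edge_strength, edge_percentile)  # assumed cutoff
    return edge_strength > threshold
```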
Or, almost. The other piece of this is using visual servoing to pass through the gap. Visual servoing is just using visual feedback to control motion: The drone takes a picture of the gap, moves forward, takes another picture, and then adjusts its movement to make sure that its position relative to the gap is still what it wants. This is different from a preplanned approach, where the drone figures out in advance the entire path that it wants to take and then follows it—visual servoing is more on the fly. Or, you know, on the bee.
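And a correspondingly simple visual-servoing loop, again a sketch rather than the paper’s controller, could just keep the detected gap centered in the image while creeping forward; the gains and sign conventions here are placeholders.

```python
import numpy as np

K_LATERAL = 0.004    # assumed proportional gains mapping pixel error to velocity commands
K_VERTICAL = 0.004
FORWARD_SPEED = 0.5  # m/s, an assumed conservative constant forward velocity

def servo_command(gap_mask, image_shape):
    """Compute (forward, lateral, vertical) velocity commands from the current gap mask."""
    rows, cols = np.nonzero(gap_mask)
    if rows.size == 0:
        return 0.0, 0.0, 0.0            # gap lost: stop and re-detect
    gap_center = np.array([cols.mean(), rows.mean()])
    image_center = np.array([image_shape[1] / 2.0, image_shape[0] / 2.0])
    error_x, error_y = gap_center - image_center
    # Keep the gap centered in the frame while moving forward; re-evaluated every frame.
    return FORWARD_SPEED, -K_LATERAL * error_x, -K_VERTICAL * error_y
```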
The UMD researchers tested this out with a Bebop 2 drone packing an Nvidia Jetson TX2 GPU. A variety of different gaps of varying sizes and shapes were cut in a foreground wall, which was covered in newspapers to give them some extra texture, and this is where we’re obligated to point out that this technique probably won’t work out if you’re trying to fly through a gap in one white wall with another white wall on the other side. Anyway, as long as you’ve got newspapered walls, this system works quite well, the researchers say: “We achieved a remarkable success rate of 85 percent over 150 trials for different arbitrary shaped windows under a wide range of conditions which includes a window with a minimum tolerance of just 5 cm.”
The maximum speed that the drone was able to achieve while passing through the gap was 2.5 meters per second, primarily constrained by the rolling shutter camera (which could mess up the optical flow at higher speeds), but again, this method isn’t really intended for high-performance drones. Having said that, the researchers do mention in the conclusion of their paper that “IMU data can be coupled with the monocular camera to get a scale of the window and plan for aggressive maneuvers.” So, hopefully we’ll be seeing some of that in the near future.
[ UMD ]
Wi-Fi Protected Access 2, or WPA2, had a good run. But after 14 years as the go-to wireless security protocol, cracks inevitably start to show. That’s why, over the summer, the Wi-Fi Alliance announced the protocol’s successor, WPA3, after teasing its capabilities in press releases since the beginning of the year.
But the Wi-Fi Alliance, which is the organization responsible for certifying products that use Wi-Fi, might not have done everything it could have done to bring wireless security entirely up to date, at least according to one outside researcher. Mathy Vanhoef, the researcher at KU Leuven in Belgium who discovered the WPA2-crippling KRACK attack in 2016, believes the Wi-Fi Alliance could have done a better job of investigating alternatives for security protocols and certifications.
The big change from WPA2 to WPA3 is in the way devices greet a router or other access point to which they are trying to connect. WPA3 introduces a greeting, or handshake, called a Simultaneous Authentication of Equals (SAE). There are more details in this post, but the upshot is that SAE, also known as a dragonfly handshake, prevents attacks (like KRACK) that interrupt the handshake method in WPA2. It ensures that the exchange of keys to prove each device’s identity can’t be interrupted by treating both the device and the router as equals, as the name implies. Previously, such exchanges had an inquirer (typically the device) and an authorizer (the router).
So SAE solves some big vulnerabilities of WPA2—an important step, but maybe not enough. According to Vanhoef, the scuttlebutt in the security community is that the dragonfly handshake will prevent debilitating attacks like KRACK, but questions remain regarding whether it is good enough beyond that.
Vanhoef says mathematical analyses of dragonfly handshakes suggest that they should be secure. “On the other hand, there were some comments and critiques [suggesting] that there were other options,” he says. “The chance that there could be some small issues is higher than with other handshakes.”
One concern that’s been raised is the possibility of side-channel attacks, specifically timing attacks. While SAE is resilient to attacks that interrupt the greeting directly, it could be vulnerable to more passive attacks that observe the timing of the authentication and glean some information about the password based on that.
In 2013, researchers at Newcastle University found in their cryptanalysis of SAE that the handshake is vulnerable to so-called small subgroup attacks. These attacks force the keys exchanged by the router and the connecting device to be limited to a much smaller, more solvable subgroup of options than the very large amount traditionally available. To patch this vulnerability, the researchers suggested that SAE be augmented with an additional key validation step, sacrificing some of the handshake’s efficiency in the process.
SAE does protect against the attacks that exploited WPA2’s shortcomings though. Kevin Robinson, the vice-president of marketing for the Wi-Fi Alliance, says it renders off-line dictionary attacks impossible. These attacks are possible when an attacker can test thousands or hundreds of thousands of possible passwords in quick succession without raising the network’s suspicions. SAE also offers forward secrecy—if an attacker does gain access to a network, any data sent to or from the network before that point will remain secure, which was not the case in WPA2.
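To see why offline dictionary attacks were such a problem under WPA2-PSK, recall that its pairwise master key is derived from the passphrase and SSID with PBKDF2; once a handshake has been captured, candidate passphrases can be checked locally, as fast as the attacker’s hardware allows, without ever touching the network. The snippet below shows only that derivation step, with a deliberately toy wordlist.

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    """WPA2-PSK pairwise master key: PBKDF2-HMAC-SHA1, 4096 iterations, 256-bit output."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# Offline guessing: each candidate is checked locally against a captured handshake,
# so the access point never sees (or throttles) the attempts. Wordlist is illustrative.
for candidate in ["password123", "letmein", "correcthorse"]:
    pmk = wpa2_pmk(candidate, "HomeNetwork")
    # ...a real attack would derive the transient keys from pmk and compare them
    # against the captured handshake's message integrity code...
    print(candidate, pmk.hex()[:16])
```

SAE’s design goal, as described above, is precisely to make this kind of offline verification impossible: each password guess requires a live exchange with the access point.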
When the Wi-Fi Alliance first announced WPA3 in a press release last January, it announced a “suite of features” to improve security. The release hinted at four features in particular. One, SAE, became the core of WPA3. Another, a 192-bit encryption scheme, is optional for large corporations or financial institutions making the switch to WPA3. The other two features never made it to WPA3.
The features that didn’t make the cut exist as entirely separate certification programs. The first, Easy Connect, makes it simpler for users to connect their IoT devices to their home networks. The other, Enhanced Open, provides more protection for open networks, like the ones at airports and coffee shops.
“The Wi-Fi Alliance, I think, purposefully kept their press release at the beginning of the year vague,” says Vanhoef. “They didn’t promise anything would be part of WPA3. The speculation was that all of it would be mandatory. Only the dragonfly handshake is mandatory, which is a shame, I think.”
Vanhoef’s worry is that three separate certification programs—WPA3, Easy Connect, and Enhanced Open—will confuse users rather than covering them all under the WPA3 umbrella. “You have to tell ordinary users to use Easy Connect and Enhanced Open,” he says.
For its part, the Wi-Fi Alliance believes the separate certification programs will reduce user confusion. “It’s important for the user to understand there is a difference between WPA3 and Enhanced Open, which is still ultimately an open network,” says Robinson. Likewise, he says the industry groups that make up the Wi-Fi Alliance felt it was important for Easy Connect to offer streamlined connections to devices that still used WPA2, rather than limit it to new devices.
Still, regardless of whether users are confounded or reassured by the Wi-Fi Alliance’s suite of new certification programs, Vanhoef believes the Wi-Fi Alliance could have been more open about its selection process. “They did it in a closed manner,” he says. “It made it hard for the security experts and cryptographers to comment on it,” leading to the concern over SAE’s potential vulnerabilities and the multiple certification programs.
Vanhoef also points out that an open process could have resulted in an even stronger WPA3. “What we see a lot is a very secretive process,” he says. “And then we find out the security is very weak. In general, we’ve found it’s better to be open.”