IEEE News

IEEE Spectrum

  • China Aims for a Permanent Moon Base in the 2030s
    by Andrew Jones on 22 September 2021 at 19:00

    On 3 January 2019, the Chinese spacecraft Chang'e-4 descended toward the moon. Countless craters came into view as the lander approached the surface, the fractal nature of the footage providing no sense of altitude. Su Yan, responsible for data reception for the landing at Miyun ground station, in Beijing, was waiting—nervously and in silence with her team—for vital signals indicating that optical, laser, and microwave sensors had combined effectively with rocket engines for a soft landing. "When the [spectral signals were] clearly visible, everyone cheered enthusiastically. Years of hard work had paid off in the most sweet way," Su recalls. Chang'e-4 had, with the help of a relay satellite out beyond the moon, made an unprecedented landing on the always-hidden lunar far side. China's space program, long trailing in the footsteps of the U.S. and Soviet (now Russian) programs, had registered an international first. The landing also prefigured grander Chinese lunar ambitions. In 2020 Chang'e-5, a complex sample-return mission, returned to Earth with young lunar rocks, completing China's three-step "orbit, land, and return" lunar program conceived in the early 2000s. These successes, together with renewed international scientific and commercial interest in the moon, have emboldened China to embark on a new lunar project that builds on the Chang'e program's newly acquired capabilities. The International Lunar Research Station (ILRS) is a complex, multiphase megaproject that the China National Space Administration (CNSA) unveiled jointly with Russia in June in St. Petersburg. Starting with robotic landing and orbiting missions in the 2020s, its designers envision a permanently inhabited lunar base by the mid-2030s. Objectives include science, exploration, technology verification, resource and commercial exploitation, astronomical observation, and more. ILRS will begin with a robotic reconnaissance phase running up to 2030, using orbiting and surface spacecraft to survey potential landing areas and resources, conduct technology-verification tests, and assess the prospects for an eventual permanent crewed base on the moon. The phase will consist of Chinese missions Chang'e-4, Chang'e-6 sample return, and the more ambitious Chang'e-7, as well as Russian Luna spacecraft, plus potential missions from international partners interested in joining the endeavor. Chang'e-7 will target a lunar south pole landing and consist of an orbiter, relay satellite, lander, and rover. It will also include a small spacecraft capable of "hopping" to explore shadowed craters for evidence of potential water ice, a resource that, if present, could be used in the future for both propulsion and supplies for astronauts. CNSA will help select the site for a two-stage construction phase that will involve in situ resource utilization (ISRU) tests with Chang'e-8, massive cargo delivery with precision landings, and the start of joint operations between partners. ISRU, in this case using the lunar regolith (the fine dust, soil, and rock that makes up most of the moon's surface) for construction and extraction of resources such as oxygen and water, would represent a big breakthrough. Being able to use resources already on the moon means fewer things need to be delivered, at great expense, from Earth. The China National Space Administration (CNSA) recently unveiled its plans for a lunar base in the 2030s, the International Lunar Research Station (ILRS). 
The first phase involves prototyping, exploration, and reconnaissance of possible ILRS locations. The utilization phase will begin in the early 2030s. It tentatively consists of missions numbered ILRS-1 through 5 and relies on heavy-lift launch vehicles to establish command, energy, and telecommunications infrastructure; experiment, scientific, and ISRU facilities; and Earth- and astronomical-observation capabilities. CNSA artist renderings indicate spacecraft will use the lunar regolith to make structures that would provide shielding from radiation while also exploring lava tubes as potential alternative areas for habitats. The completed ILRS would then host and support crewed missions to the moon in around 2036. This phase, CNSA says, will feature lunar research and exploration, technology verification, and expanding and maintaining modules as needed. These initial plans are vague, but senior figures in China's space industry have noted huge, if challenging, possibilities that could greatly contribute to development on Earth. Ouyang Ziyuan, a cosmochemist and early driving force for Chinese lunar exploration, noted in a July talk the potential extraction of helium-3, delivered to the lunar surface by unfiltered solar wind, for nuclear fusion (which would require major breakthroughs on Earth and in space). Another possibility is 3D printing of solar panels at the moon's equator, which would capture solar energy to be transmitted to Earth by lasers or microwaves. China is already conducting early research toward this end. As with NASA's Artemis plan, Ouyang notes that the moon is a stepping-stone to other destinations in the solar system, both through learning and as a launchpad. The more distant proposals currently appear beyond reach, but in its space endeavors China has demonstrated a willingness to develop capabilities and apply them to new possibilities. Sample-return tech from Chang'e-5 will next be used to collect material from a near-Earth asteroid around 2024. Near the end of the decade, this tech will contribute to the Tianwen-1 Mars mission's capabilities for an unprecedented Mars sample-return attempt. How the ILRS develops will then depend on the success of the early missions and on their science and resource findings. China is already well placed to implement the early phases of the ILRS blueprint. The Long March 5, a heavy-lift rocket, had its first flight in 2016 and has since enabled the country to begin constructing a space station and to launch spacecraft such as a first independent interplanetary mission and Chang'e-5. To develop the rocket, China had to make breakthroughs in using cryogenic propellant and machining a new, wider-diameter rocket body. This won't be enough for larger missions, however. Huang Jun, a professor at Beihang University, in Beijing, says a super heavy-lift rocket, the high-thrust Long March 9, is a necessity for the future of Chinese aerospace. "Research and breakthroughs in key technologies are progressing smoothly, and the project may at any time enter the engineering-development stage." CNSA's plans for its international moon base involve a set of missions, dubbed ILRS-1 through ILRS-5, now projected between 2031 and 2035. ILRS-1, as planned, will in 2031 establish a command center and basic infrastructure.
Subsequent missions over the ensuing four years would set up research facilities, sample-collection systems, and Earth- and space-observation capabilities. The roughly 100-meter-long, Saturn V–like Long March 9 will be capable of launching around 50 tonnes of payload to translunar injection. The project requires precision manufacturing of thin yet strong, 10-meter-diameter rocket stages and huge new engines. In Beijing, propulsion institutes under the China Aerospace Science and Technology Corp. recently produced an engineering prototype of a 220-tonne-thrust staged-combustion liquid hydrogen/liquid oxygen engine. In a ravine near Xi'an, in north China, firing tests of a dual-chamber 500-tonne-thrust kerosene/liquid oxygen engine for the first stage have been carried out. Long March 9 is expected to have its first flight around 2030, which would come just in time to launch the robotic ILRS construction missions. A human-rated rocket is also under development, building on technologies from the Long March 5. It will feature similar but uprated versions of the YF-100 kerosene/liquid oxygen engine and use three rocket cores, in a similar fashion to SpaceX's Falcon Heavy. Its task will be sending a deep-space-capable crew spacecraft into lunar orbit, where it could dock with a lunar-landing stack launched by a Long March 9. The spacecraft itself is a new-generation advance on the Shenzhou, which currently ferries astronauts to and from low Earth orbit. A test launch in May 2020 verified that the new vessel can handle the greater heat of a higher-speed atmospheric reentry from higher, more energetic orbits. Work on a crew lander is also assumed to be underway. The Chang'e-5 mission was also seen as a scaled test run for human landings, as it followed a profile similar to NASA's Apollo missions. After lifting off from the moon, the ascent vehicle reunited and docked with a service module, much in the way that an Apollo ascent vehicle rejoined a command module in lunar orbit before the journey home. China and Russia are inviting all interested countries and partners to cooperate in the project. The initiative will be separate from the United States' Artemis moon program, however. The United States has long opposed cooperating with China in space, and recent geopolitical developments involving both Beijing and Moscow have made things worse still. As a result, China and Russia, the United States' International Space Station partner, have looked to each other as off-world partners. "Ideally, we would have an international coalition of countries working on a lunar base, such as the Moon Village concept proposed by former ESA director-general Jan Wörner. But so far geopolitics have gotten in the way of doing that," says Brian Weeden, director of program planning for the Secure World Foundation. The final details and partners may change, but China, for its part, seems set on continuing the accumulation of expertise and technologies necessary to get to the moon and back, and stay there in the long term. This article appears in the October 2021 print issue as "China's Lunar Station Megaproject."

  • Air Quality: Easy to Measure, Tough to Fix
    by Matthew S. Smith on 22 September 2021 at 15:00

    The summer of 2020 brought wildfire to Portland, Ore., as it did to so many other cities across the world. All outdoor activity in my neighborhood ceased for weeks, yet staying indoors didn't guarantee relief. The worst days left me woozy as my lone air purifier, whirring like a jet engine, failed to keep up. Obviously, the air in my home was bad. But I had no idea of how bad because, like most people, I had no way to measure it. That's changing, thanks to indoor air-quality monitors like Airthings' View Plus. Sold for US $299, the View Plus can gauge seven critical metrics: radon, particulates, carbon dioxide, humidity, temperature, volatile organic compounds, and air pressure. The monitor proved useful. I learned that cooking dinner can spike particulates into unhealthy territory for several hours, a sign that my oven vent is not working properly. The monitor also reported low levels of radon, proof that my home's radon mitigation system is doing its job. I had the monitor installed, working, and connected to the Airthings app less than 10 minutes after it arrived at my doorstep, in June. Reading the app was easy: It color-coded the results as good, fair, or poor. I have only one monitor, but the system can support multiple devices, making it possible to sniff out how air quality differs between rooms. You can also just move the device, though it needs time to update its readings. Airthings' monitor is unusual because it combines a radon sensor with other air-quality metrics, but it's certainly not alone. Alternatives are available from IQAir, Kaiterra, and Temtop, among others, and they range in price from $80 to $300. These monitors don't require permanent installation, so they're suitable for renters as well as owners. Of course, it's not enough to detect air pollutants; you must also remove them. That problem is more difficult. Air purifiers surged in popularity through the second half of 2020 in response to dual airborne threats of COVID-19 and wildfire smoke. Companies responded to this demand at 2021's all-digital Consumer Electronics Show. LG led its presentation with personal air purifiers instead of televisions. Coway, Luft, and Scosche all showed new models, with Coway winning a CES Innovation Award for its new Design Flex purifiers. Unfortunately, consumers newly educated on indoor air quality will be puzzled about which air purifier, if any, is appropriate. Purifiers vary widely in the pollutants they claim to clean and how they claim to clean them. Most models advertise a HEPA air filter, which promises a specific standard of efficiency based on its rating, but this is often combined with unproven UV light, ionization, and ozone technologies that vaguely claim to catch toxins and kill pathogens, even COVID-19. This is the wild, wild west of air purification. It's true that an activated carbon filter can remove volatile organic compounds and ozone from the air. There's no common standard for efficiency, however, so shoppers must cross their fingers and hope for the best. Ionization, another popular feature, is no better. Studies suggest ionization can destroy viruses and bacteria in the air but, again, there's no common standard. In fact, ionization can itself create ozone. The state of California has banned such ozone generators entirely, but you'll still find these products on Amazon and other retailers.
Studies even suggest the ionization feature in some purifiers may interact with the air in unpredictable ways, adding new pollutants. It's vital that companies designing air purifiers police their products and work together on standards that make sense to consumers. 2021's harsh fire season will keep demand high, but new, easy-to-use monitors like the Airthings View Plus will leave homeowners better informed about air quality—and ready to kick unproven purifiers to the curb. This article appears in the October 2021 print issue as "The Indoor Air-Quality Paradox."
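For readers curious how a raw reading becomes a "good, fair, or poor" label, the Python sketch below mimics the kind of banding the Airthings app applies to sensor values. The thresholds are illustrative assumptions loosely based on common indoor-air guidance, not Airthings' actual cutoffs, and the function name is invented for this example.

    def classify(metric: str, value: float) -> str:
        """Return 'good', 'fair', or 'poor' for a single sensor reading."""
        # (lower bound of 'fair', lower bound of 'poor') -- assumed values.
        bands = {
            "pm2_5_ug_m3": (12.0, 35.0),    # fine particulates
            "co2_ppm": (800.0, 1000.0),     # carbon dioxide
            "radon_bq_m3": (100.0, 150.0),  # radon, long-term average
            "voc_ppb": (250.0, 2000.0),     # volatile organic compounds
        }
        fair, poor = bands[metric]
        if value < fair:
            return "good"
        if value < poor:
            return "fair"
        return "poor"

    # A cooking spike like the one described above lands squarely in the red.
    print(classify("pm2_5_ug_m3", 48.0))  # -> "poor"

A real monitor would also smooth readings over time before labeling them, since a short spike from a pot boiling over matters less than sustained exposure.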

  • Will This Jetpack Fly Itself?
    by Edd Gent on 22 September 2021 at 13:23

    Jetpacks might sound fun, but learning how to control a pair of jet engines strapped to your back is no easy feat. Now a British startup wants to simplify things by developing a jetpack with an autopilot system that makes operating it more like controlling a high-end drone than learning how to fly. Jetpacks made the leap from sci-fi to the real world as far back as the 1960s, but since then they haven't found much use outside of gimmicky appearances in movies and halftime shows. In recent years, though, the idea has received renewed interest. And its proponents are keen to show that the technology is no longer just for stuntmen and may even have practical applications. American firm Jetpack Aviation will teach anyone to fly its JB-10 jetpack for a cool $4,950 and recently sold its latest JB-12 model to an "undisclosed military." And an Iron Man-like, jet-powered flying suit developed by British startup Gravity Industries has been tested as a way for marines to board ships and as a way to get medics to the top of mountains quickly. Flying jetpacks can take a lot of training to master, though. That's what prompted Hollywood animatronics expert Matt Denton and Royal Navy Commander Antony Quinn to found Maverick Aviation, and to develop one that takes the complexities of flight control out of the pilot's hands. The Maverick Jetpack features four miniature jet turbines attached to an aluminum, titanium, and carbon-fiber frame, and will travel at up to 30 miles per hour. But the secret ingredient is software that automatically controls the engines to maintain a stable hover, and seamlessly converts the pilot's instructions into precise movements. "It's going to be very much like flying a drone," says Denton. "We wanted to come up with something that anyone could fly. It's all computer-controlled and you'll just be using the joystick." One of the key challenges, says Denton, was making the engines responsive enough to allow the rapid tweaks required for flight stabilization. This is relatively simple to achieve on a drone, whose electric motors can be adjusted in the blink of an eye, but jet turbines can take several seconds to ramp up and down between zero and full power. To get around this, the company added servos to each turbine that let them move independently to quickly alter the direction of thrust—a process known as thrust vectoring. By shifting the alignment of the four engines, the flight-control software can keep the jetpack perfectly positioned using feedback from inertial measurement units, GPS, altimeters, and ground-distance sensors. Simple directional instructions from the pilot can also be automatically translated into the required low-level tweaks to the turbines. It's a clever way to improve the mobility of the system, says Ben Akih-Kumgeh, an associate professor of aerospace engineering at Syracuse University. "It's not only a smart way of overcoming any lag that you may have, but it also helps with the lifespan of the engine," he adds. "[In] any mechanical system, the durability depends on how often you change the operating conditions." The software is fairly similar to a conventional drone flight controller, says Denton, but they have had to accommodate some additional complexities. Thrust magnitude and thrust direction have to be managed by separate control loops due to their very different reaction times, but they still need to sync up seamlessly to coordinate adjustments. The entire control process is also complicated by the fact that the jetpack has a human strapped to it.
"Once you've got a shifting payload, like a person who's wobbling their arms around and moving their legs, then it does become a much more complex problem," says Denton. In the long run, says Denton, the company hopes to add higher-level functions that could allow the jetpack to move automatically between points marked on a map. The hope is that by automating as much of the flight control as possible, users will be able to focus on the task at hand, whether that's fixing a wind turbine or inspecting a construction site. Surrendering so much control to a computer might give some pause for thought, but Denton says there will be plenty of redundancy built in. "The idea will be that we'll have plenty of fallback modes where, if part of the system fails, it'll fall back to a more manual flight mode," he said. "The user would have training to basically tackle any of those conditions." It might be sometime before you can start basic training, though, as the company has yet to fly their turbine-powered jetpack. Currently, flight testing is being conducted on an scaled down model powered by electric ducted fans, says Denton, though their responsiveness has been deliberately dulled so they behave like turbines. The company is hoping to conduct the first human test flights next summer. Don't get your hopes up about commuting to work by jetpack any time soon though, says Akih-Kumgeh. The huge amount of noise these devices produce make it unlikely that they would be allowed to operate within city limits. The near term applications are more likely to be search and rescue missions where time and speed trump efficiency, he says.

  • DARPA SubT Final: How It Works and How to Watch
    by Evan Ackerman on 21 September 2021 at 20:22

    The preliminary rounds of the DARPA Subterranean Challenge Finals are kicking off today. It's been a little bit since the last DARPA SubT event—the Urban Circuit squeaked through right before the pandemic hit back in February of 2020, and the in-person Cave Circuit originally scheduled for later that year was canceled. So if it's been a while since you've thought about SubT, this article will provide a very brief refresher, and we'll also go through different ways in which you can follow along with the action over the course of the week. The overall idea of the DARPA Subterranean Challenge is to get teams of robots doing useful stuff in challenging underground environments. "Useful stuff" means finding important objects or stranded humans, and "challenging underground environments" includes human-made tunnel systems, the urban underground (basements, subways, etc.), as well as natural caves. And "teams of robots" can include robots that drive, crawl, fly, walk, or anything in between. Over the past few years, teams of virtual and physical robots have competed in separate DARPA-designed courses representing each of those three underground domains. The Tunnel Event took place in an old coal mine, the Urban Event took place in an unfinished nuclear reactor complex, and the Cave Event—well, that got canceled because of COVID, but lots of teams found natural caves to practice in anyway. So far, we've learned that underground environments are super hard for robots. Communications are a huge problem, and robots have to rely heavily on autonomy and teamwork rather than having humans tell them what to do, although we've also seen all kinds of clever solutions to this problem. Mobility is tough, but legged robots have been surprisingly useful, and despite the exceptionally unfriendly environment, drones are playing a role in the challenge as well. Each team brings a different approach to the Subterranean Challenge, and every point scored represents progress towards robots that can actually be helpful in underground environments when we need them to be. The final Subterranean Challenge event, happening this week, includes both a Virtual Track for teams competing with virtual robots and a Systems Track for teams competing with physical robots. Let's take a look at how the final competition will work, and then the best ways to watch what's happening.

How It Works

If you've been following along with the previous circuits (Tunnel and Urban), the overall structure of the Final will be somewhat familiar, but there are some important differences to keep in mind. First, rather than being a specific kind of underground environment, the final course will incorporate elements from all three environments as well as some dynamic obstacles that could include things like closing doors or falling rocks. Only DARPA knows what the course looks like, and it will be reconfigured every day. Each of the Systems Track teams will have one 30-minute run on the course on Tuesday and another on Wednesday. Thirty minutes is half the amount of time that teams have had in previous competitions. A team's preliminary-round score will be the sum of the scores of the two runs, but every team will get to compete in the final on Thursday no matter what their score is: the preliminary score only serves to set the team order, with higher-scoring teams competing later in the final event. The final scoring run for all teams happens on Thursday.
There will be a single 60-minute run for each team, which is a departure from previous events: if a team's robots misbehave on Thursday, that's just too bad, because there is no second chance. A team's score on the Thursday run is what will decide who wins the Final event; no matter how well a team did in previous events or in the preliminary runs this week, the Thursday run is the only one that counts for the prize money. Scoring works the same as in previous events. There will be artifacts placed throughout the course, made up of 10 different artifact types, like cell phones and fire extinguishers. Robots must identify the specific artifact type and transmit its location back to the starting area, and if that location is correct within 5 meters, a point is scored. Teams have a limited number of scoring attempts, though: there will be a total of 40 artifacts on the course for the prize round, but only 45 scoring attempts are allowed. And if a robot locates an artifact but doesn't manage to transmit that location back to base, it doesn't get that point. The winning team is the one with the most artifacts located in the shortest amount of time (time matters only in the event of a tie). The Virtual Track winners will take home $750k, while the top Systems Track team wins $2 million, with $1 million for second and $500k for third. If that's not enough background for you, DARPA has helpfully provided an hour-long video intro.

How to Watch

Watching the final event is sadly not as easy as it has been for previous events. Rather than publicly live streaming raw video feeds from cameras hidden inside the course, DARPA will instead record everything themselves and then produce edited and commentated video recaps that will post to YouTube the following day. So, Tuesday's preliminary-round content will be posted on Wednesday, the Wednesday prelims post Thursday, and the Final event on Thursday will be broadcast on Friday as the teams themselves watch. The SubT Summit on Friday afternoon consists of roundtable discussions from both the Virtual Track teams and Systems Track teams; those will be from 2:30 to 3:30 and 4:00 to 5:00 respectively, with a half-hour break in the middle. All of these streams are pre-scheduled on the DARPA YouTube channel. DARPA will also be posting daily blogs and sharing photos on its website. After the Thursday Final, it might be possible for us to figure out a likely winner based on artifact counts. But the idea is that even though the Friday broadcast is one day behind the competition, both we and the teams will be finding out what happened (and who won) at the same time—that's what will happen on the Friday livestream. Saturday, incidentally, has been set aside for teams to mess around on the course if they want to. This won't be recorded or broadcast at all, but I'll be there for a bit to see what happens. If you're specifically looking for a way to follow along in real time, I'm sorry to say that there isn't one. There will be real-time course feeds in the press room, but press is not allowed to share any of the things that we see. So if you're looking for details that are as close to live as possible, I'd recommend checking out Twitter, because many teams and team members are live-tweeting comments and pictures and stuff, and the easiest way to find that is by searching for the #SubTChallenge hashtag.
Lastly, if you've got specific things that you'd like to see or questions for DARPA or for any of the teams, ping me on Twitter @BotJunkie and I'll happily see what I can do.
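For anyone keeping score at home, the rules described above reduce to a small amount of bookkeeping. The Python sketch below captures them: a report earns a point if the artifact type matches and the reported position is within 5 meters of ground truth, teams get a limited budget of scoring attempts, and ties in points are broken by elapsed time. The data structures are invented for illustration; this is not DARPA's scoring code.

    import math

    MAX_ATTEMPTS = 45   # scoring attempts allowed in the prize round
    RADIUS_M = 5.0      # a report must land within 5 meters of the artifact

    def score_run(reports, ground_truth):
        """reports: list of (artifact_type, (x, y, z)) tuples, in submission order.
        ground_truth: list of (artifact_type, (x, y, z)) for the 40 placed artifacts."""
        remaining = list(ground_truth)
        points = 0
        for attempt, (kind, pos) in enumerate(reports):
            if attempt >= MAX_ATTEMPTS:
                break  # extra reports beyond the budget are ignored
            for gt_kind, gt_pos in remaining:
                if kind == gt_kind and math.dist(pos, gt_pos) <= RADIUS_M:
                    points += 1
                    remaining.remove((gt_kind, gt_pos))  # each artifact scores once
                    break
        return points

    # Ranking: most points wins; elapsed time only matters to break a tie.
    def rank_key(points, elapsed_seconds):
        return (-points, elapsed_seconds)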

  • We Need Software Updates Forever
    by Mark Pesce on 21 September 2021 at 19:00

    I recently did some Marie Kondo–inspired housecleaning: Anything that didn't bring me joy got binned. In the process, I unearthed some old gadgets that made me smile. One was my venerable Nokia N95, a proto-smartphone, the first to sport GPS. Another was a craptastic Android tablet—a relic of an era when each year I would purchase the best tablet I could for less than $100 (Australian!), just to see how much you could get for that little. And there was my beloved Sony PlayStation Portable. While I rarely used it, I loved what the PSP represented: a high-powered handheld device, another forerunner of today's smartphone, though one designed for gaming rather than talking. These nifty antiques shared a common problem: Although each booted up successfully, none of them really work anymore. In 2014, Nokia sold off its smartphone division to Microsoft in a fire sale; then Microsoft spiked the whole effort. These moves make my N95 an orphan product from a defunct division of a massive company. Without new firmware, it's essentially useless. My craptastic tablet and PSP similarly need a software refresh. Yet neither of them can log into or even locate the appropriate update servers. You might think that a 15-year-old gaming console wouldn't even be operating, but Sony's build quality is such that, with the exception of a very tired lithium-ion battery, the unit is in perfect condition. It runs but can't connect to modern Wi-Fi without a firmware update, which it can't download without a connection (a classic catch-22). I've wasted a few hours trying to work out how to get new firmware on it (and on the tablet), without success. Two perfectly good pieces of electronic gear have become useless, simply for want of software updates. Consumers have relied on the good graces of device makers to keep our gadget firmware and software secure and up-to-date. Doing so costs the manufacturer some of its profits. As a result, many of them are apt to drop support for old gadgets faster than the gadgets themselves wear out. This corporate stinginess consigns far too many of our devices to the trash heap before they have exhausted their usability. That's bad for consumers and bad for the planet. It needs to stop. We have seen a global right-to-repair movement emerge from maker communities and start to influence public policy around such things as the availability of spare parts. I'd argue that there should be a parallel right-to-maintain movement. We should mandate that device manufacturers set aside a portion of the purchase price of a gadget to support ongoing software maintenance, forcing them to budget for a future they'd rather ignore. Or maybe they aren't ignoring the future so much as trying to manage it by speeding up product obsolescence, because it typically sparks another purchase. Does this mean Sony and others should still be supporting products nearly two decades old, like my PSP? If that keeps them out of the landfill, I'd say yes: The benefits easily outweigh the costs. The devilish details come in decisions about who should bear those costs. But even if they fell wholly on the purchaser, consumers would, I suspect, be willing to pay a few dollars more for a gadget if that meant reliable access to software for it—indefinitely.
Yes, we all want shiny new toys—and we'll have plenty of them—but we shouldn't build that future atop the prematurely discarded remains of our electronic past. This article appears in the October 2021 print issue as "Bricked by Age."

  • How Health Care Organizations Can Thwart Cyberattacks
    by IEEE Standards Association on 21 September 2021 at 18:00

    Ransomware and other types of cyberattacks are striking health care systems at an increasing rate. More than one in three health care organizations around the world reported ransomware attacks last year, according to a survey of IT professionals by security company Sophos. About 40 percent of the nearly 330 respondents from the health care sector that weren't attacked last year said they expect to be hit in the future. In the United States, the FBI, the Cybersecurity and Infrastructure Security Agency, and the Department of Health and Human Services were so concerned with the increase in cyberattacks on hospitals and other health care providers that in October 2020 they issued a joint advisory warning of the "increased and imminent cybercrime threat." But the health care field isn't helpless against cyber threats. The IEEE Standards Association Healthcare and Life Sciences Practice—which is focused on clinical health, the biopharmaceutical value chain, and wellness—recently released Season 2 of the Re-Think Health podcast. The new season features experts from around the world who discuss measures that can help organizations minimize and even prevent attacks. The experts emphasize that cybersecurity is more than an IT concern; they say it needs to be managed from a holistic perspective, aligning employees, technology, and processes within an organization. The six episodes in Cybersecurity for Connected Healthcare Systems: A Global Perspective are as follows:

Threat Modeling and Frameworks for Cybersecurity in Connected Health Care Ecosystems. This episode features Florence Hudson, executive director of the Northeast Big Data Innovation Hub. She provides an overview of several programs and initiatives by the IEEE SA Healthcare and Life Sciences Practice.

Cracking the Cybersecurity Code to Accelerate Innovation: A View From Australia. Ashish Mahajan, nonexecutive director of the not-for-profit advocacy and research initiative IoTSec Australia, provides insights. He explores vulnerabilities of the data value chain in the Internet of Things ecosystem that could impede innovation in public health, wellness, and health care. Mahajan also chairs the IEEE SA IoT Ecosystem Security Industry Connections program, which aims to work with regulators to promote secure practices.

Securing Greater Public Trust in Health Through Risk Mitigation: A North America Perspective. T.R. Kane, a cybersecurity, privacy, and forensics partner at PwC, explains how to strategize and how to respond to vulnerabilities. He offers strategies for managing organizational and patient risk.

Uncovering the Great Risk in Security and Privacy of Health Data in Latin America and Beyond. This eye-opening conversation with cybersecurity forensic technologist Andrés Velázquez highlights common global challenges and inherent obstacles. Velázquez is founder and president of Mattica, based in Mexico City.

Response and Prevention Strategy in Connected Health: A Perspective From Latin America. Roque Juarez, security intelligence specialist at IBM Mexico, explains how basic principles can be critical to cyber threat management in connected health care systems regardless of whether they are in an emerging or established economy. Juarez shares how the COVID-19 pandemic increased the appeal for hackers to breach labs, health care systems, and just about any repository of patient health data and research.

Cybersecurity, Trust, and Privacy in Connected Mental Health: A Perspective From Europe. The pandemic has increased the application of digital therapeutics such as mobile apps, games, and virtual reality programs for mental health conditions, according to a guidance document issued in April 2020 by the U.S. Food and Drug Administration. This episode explains opportunities and growing challenges in managing duty of care, security, and privacy with a vulnerable population of patients.

More episodes: Season 1 of the podcast is still available. Pain Points of Integrating New Technologies Into an Existing Healthcare Ecosystem features technologists, researchers, and ethicists discussing insights into opportunities and challenges.

  • 7 Revealing Ways AIs Fail
    by Charles Q. Choi on 21 September 2021 at 15:03

    This article is part of our special report on AI, “The Great AI Reckoning.” Artificial intelligence could perform more quickly, accurately, reliably, and impartially than humans on a wide range of problems, from detecting cancer to deciding who receives an interview for a job. But AIs have also suffered numerous, sometimes deadly, failures. And the increasing ubiquity of AI means that failures can affect not just individuals but millions of people. Increasingly, the AI community is cataloging these failures with an eye toward monitoring the risks they may pose. "There tends to be very little information for users to understand how these systems work and what it means to them," says Charlie Pownall, founder of the AI, Algorithmic and Automation Incident & Controversy Repository. "I think this directly impacts trust and confidence in these systems. There are lots of possible reasons why organizations are reluctant to get into the nitty-gritty of what exactly happened in an AI incident or controversy, not the least being potential legal exposure, but if looked at through the lens of trustworthiness, it's in their best interest to do so." Part of the problem is that the neural network technology that drives many AI systems can break down in ways that remain a mystery to researchers. "It's unpredictable which problems artificial intelligence will be good at, because we don't understand intelligence itself very well," says computer scientist Dan Hendrycks at the University of California, Berkeley. Here are seven examples of AI failures and what current weaknesses they reveal about artificial intelligence. Scientists discuss possible ways to deal with some of these problems; others currently defy explanation or may, philosophically speaking, lack any conclusive solution altogether.

1) Brittleness

Take a picture of a school bus. Flip it so it lies on its side, as it might be found in the case of an accident in the real world. A 2018 study found that state-of-the-art AIs that would normally correctly identify the school bus right-side-up failed to do so on average 97 percent of the time when it was rotated. "They will say the school bus is a snowplow with very high confidence," says computer scientist Anh Nguyen at Auburn University, in Alabama. The AIs are not capable of a task of mental rotation "that even my 3-year-old son could do," he says. Such a failure is an example of brittleness. An AI often "can only recognize a pattern it has seen before," Nguyen says. "If you show it a new pattern, it is easily fooled." There are numerous troubling cases of AI brittleness. Fastening stickers on a stop sign can make an AI misread it. Changing a single pixel on an image can make an AI think a horse is a frog. Neural networks can be 99.99 percent confident that multicolor static is a picture of a lion. Medical images can get modified in a way imperceptible to the human eye so medical scans misdiagnose cancer 100 percent of the time. And so on. One possible way to make AIs more robust against such failures is to expose them to as many confounding "adversarial" examples as possible, Hendrycks says. However, they may still fail against rare "black swan" events. "Black-swan problems such as COVID or the recession are hard for even humans to address—they may not be problems just specific to machine learning," he notes.
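A quick way to probe the brittleness described above is to feed a classifier the same image upright and rotated and see whether its answer survives. The Python sketch below assumes you supply any image classifier exposing a predict(image) -> (label, confidence) callable; no particular model is implied, and the helper name is invented.

    from PIL import Image

    def rotation_check(predict, image_path, angles=(0, 90, 180)):
        """Report whether a classifier's label changes as the image is rotated."""
        img = Image.open(image_path)
        results = {}
        for angle in angles:
            label, confidence = predict(img.rotate(angle, expand=True))
            results[angle] = (label, confidence)
        baseline = results[0][0]
        flips = [a for a, (label, _) in results.items() if label != baseline]
        return results, flips  # a non-empty `flips` list means the label changed

In the 2018 study cited above, this is essentially what happened with school buses: rotate the image, and a confident "school bus" becomes a confident "snowplow."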
2) Embedded Bias

Increasingly, AI is used to help support major decisions, such as who receives a loan, the length of a jail sentence, and who gets health care first. The hope is that AIs can make decisions more impartially than people often have, but much research has found that biases embedded in the data on which these AIs are trained can result in automated discrimination en masse, posing immense risks to society. For example, in 2019, scientists found that a nationally deployed health care algorithm in the United States was racially biased, affecting millions of Americans. The AI was designed to identify which patients would benefit most from intensive-care programs, but it routinely enrolled healthier white patients into such programs ahead of black patients who were sicker. Physician and researcher Ziad Obermeyer at the University of California, Berkeley, and his colleagues found the algorithm mistakenly assumed that people with high health care costs were also the sickest patients and most in need of care. However, due to systemic racism, "black patients are less likely to get health care when they need it, so are less likely to generate costs," he explains. After working with the software's developer, Obermeyer and his colleagues helped design a new algorithm that analyzed other variables and displayed 84 percent less bias. "It's a lot more work, but accounting for bias is not at all impossible," he says. They recently drafted a playbook that outlines a few basic steps that governments, businesses, and other groups can implement to detect and prevent bias in existing and future software they use. These include identifying all the algorithms they employ, understanding this software's ideal target and its performance toward that goal, retraining the AI if needed, and creating a high-level oversight body.

3) Catastrophic Forgetting

Deepfakes—highly realistic artificially generated fake images and videos, often of celebrities, politicians, and other public figures—are becoming increasingly common on the Internet and social media, and could wreak plenty of havoc by fraudulently depicting people saying or doing things that never really happened. To develop an AI that could detect deepfakes, computer scientist Shahroz Tariq and his colleagues at Sungkyunkwan University, in South Korea, created a website where people could upload images to check their authenticity. In the beginning, the researchers trained their neural network to spot one kind of deepfake. However, after a few months, many new types of deepfake emerged, and when they trained their AI to identify these new varieties of deepfake, it quickly forgot how to detect the old ones. This was an example of catastrophic forgetting—the tendency of an AI to entirely and abruptly forget information it previously knew after learning new information, essentially overwriting past knowledge with new knowledge. "Artificial neural networks have a terrible memory," Tariq says. AI researchers are pursuing a variety of strategies to prevent catastrophic forgetting so that neural networks can, as humans seem to do, continuously learn effortlessly. A simple technique is to create a specialized neural network for each new task one wants performed—say, distinguishing cats from dogs or apples from oranges—"but this is obviously not scalable, as the number of networks increases linearly with the number of tasks," says machine-learning researcher Sam Kessler at the University of Oxford, in England.
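Catastrophic forgetting is easy to reproduce on a laptop. The Python sketch below uses scikit-learn's small MLPClassifier as a stand-in for a neural network (an assumption made for convenience): train it on "task A" digits (0 through 4), then keep training on "task B" digits (5 through 9) with no rehearsal, and watch task A accuracy collapse.

    from sklearn.datasets import load_digits
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    task_a, task_b = y < 5, y >= 5
    classes = list(range(10))

    net = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)

    # Phase 1: many passes over task A only.
    for _ in range(30):
        net.partial_fit(X[task_a], y[task_a], classes=classes)
    acc_before = net.score(X[task_a], y[task_a])

    # Phase 2: many passes over task B only -- nothing reminds the net of task A.
    for _ in range(30):
        net.partial_fit(X[task_b], y[task_b])
    acc_after = net.score(X[task_a], y[task_a])

    print(f"task A accuracy before: {acc_before:.2f}, after: {acc_after:.2f}")
    # The "after" number typically falls to nearly zero: new weights overwrote old ones.

The remedies discussed next, from rehearsing a small sample of old data to distilling knowledge from a previously trained network, all amount to reminding the network of task A while it learns task B.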
One alternative Tariq and his colleagues explored as they trained their AI to spot new kinds of deepfakes was to supply it with a small amount of data on how it identified older types so it would not forget how to detect them. Essentially, this is like reviewing a summary of a textbook chapter before an exam, Tariq says. However, AIs may not always have access to past knowledge—for instance, when dealing with private information such as medical records. Tariq and his colleagues were trying to prevent an AI from relying on data from prior tasks. They had it train itself how to spot new deepfake types while also learning from another AI that was previously trained to recognize older deepfake varieties. They found this "knowledge distillation" strategy was roughly 87 percent accurate at detecting the kind of low-quality deepfakes typically shared on social media.

4) Explainability

Why does an AI suspect a person might be a criminal or have cancer? The explanation for this and other high-stakes predictions can have many legal, medical, and other consequences. The way in which AIs reach conclusions has long been considered a mysterious black box, leading to many attempts to devise ways to explain AIs' inner workings. "However, my recent work suggests the field of explainability is getting somewhat stuck," says Auburn's Nguyen. Nguyen and his colleagues investigated seven different techniques that researchers have developed to attribute explanations for AI decisions—for instance, what makes an image of a matchstick a matchstick? Is it the flame or the wooden stick? They discovered that many of these methods "are quite unstable," Nguyen says. "They can give you different explanations every time." In addition, while one attribution method might work on one set of neural networks, "it might fail completely on another set," Nguyen adds. The future of explainability may involve building databases of correct explanations, Nguyen says. Attribution methods can then go to such knowledge bases "and search for facts that might explain decisions," he says.

5) Quantifying Uncertainty

In 2016, a Tesla Model S car on autopilot collided with a truck that was turning left in front of it in northern Florida, killing its driver—the automated driving system's first reported fatality. According to Tesla's official blog, neither the autopilot system nor the driver "noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied." One potential way Tesla, Uber, and other companies may avoid such disasters is for their cars to do a better job at calculating and dealing with uncertainty. Currently AIs "can be very certain even though they're very wrong," Oxford's Kessler says. If an algorithm makes a decision, "we should have a robust idea of how confident it is in that decision, especially for a medical diagnosis or a self-driving car, and if it's very uncertain, then a human can intervene and give [their] own verdict or assessment of the situation," he says. For example, computer scientist Moloud Abdar at Deakin University in Australia and his colleagues applied several different uncertainty quantification techniques as an AI classified skin-cancer images as malignant or benign, or melanoma or not. The researchers found these methods helped prevent the AI from making overconfident diagnoses.
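One widely used recipe for the uncertainty quantification described above is to train an ensemble and treat disagreement among its members as a signal that a human should review the case. The Python sketch below illustrates that idea with scikit-learn's random forest on a built-in dataset; it is a generic example, not the specific techniques Abdar's group applied to skin-cancer images.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Each tree in the forest serves as one ensemble member.
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    votes = np.stack([tree.predict(X_te) for tree in forest.estimators_])

    # Agreement of 1.0 means unanimous; near 0.0 means the members are split.
    agreement = np.abs(votes.mean(axis=0) - 0.5) * 2
    needs_review = agreement < 0.6

    print(f"{int(needs_review.sum())} of {len(X_te)} cases flagged for human review")

The threshold (0.6 here) is the knob that trades automation for safety: lower it and the model decides more cases on its own; raise it and more cases go to a person.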
Autonomous vehicles remain challenging for uncertainty quantification, as current uncertainty-quantification techniques are often relatively time-consuming, "and cars cannot wait for them," Abdar says. "We need to have much faster approaches."

6) Common Sense

AIs lack common sense—the ability to reach acceptable, logical conclusions based on a vast context of everyday knowledge that people usually take for granted, says computer scientist Xiang Ren at the University of Southern California. "If you don't pay very much attention to what these models are actually learning, they can learn shortcuts that lead them to misbehave," he says. For instance, scientists may train AIs to detect hate speech on data where such speech is unusually high, such as white supremacist forums. However, when this software is exposed to the real world, it can fail to recognize that black and gay people may respectively use the words "black" and "gay" more often than other groups. "Even if a post is quoting a news article mentioning Jewish or black or gay people without any particular sentiment, it might be misclassified as hate speech," Ren says. In contrast, "humans reading through a whole sentence can recognize when an adjective is used in a hateful context." Previous research suggested that state-of-the-art AIs could draw logical inferences about the world with up to roughly 90 percent accuracy, suggesting they were making progress at achieving common sense. However, when Ren and his colleagues tested these models, they found even the best AI could generate logically coherent sentences with slightly less than 32 percent accuracy. When it comes to developing common sense, "one thing we care a lot [about] these days in the AI community is employing more comprehensive checklists to look at the behavior of models on multiple dimensions," he says.

7) Math

Although conventional computers are good at crunching numbers, AIs "are surprisingly not good at mathematics at all," Berkeley's Hendrycks says. "You might have the latest and greatest models that take hundreds of GPUs to train, and they're still just not as reliable as a pocket calculator." For example, Hendrycks and his colleagues trained an AI on hundreds of thousands of math problems with step-by-step solutions. However, when tested on 12,500 problems from high school math competitions, "it only got something like 5 percent accuracy," he says. In comparison, a three-time International Mathematical Olympiad gold medalist attained 90 percent success on such problems "without a calculator," he adds. Neural networks nowadays can learn to solve nearly every kind of problem "if you just give it enough data and enough resources, but not math," Hendrycks says. Many problems in science require a lot of math, so this current weakness of AI can limit its application in scientific research, he notes. It remains uncertain why AI is currently bad at math. One possibility is that neural networks attack problems in a highly parallel manner like human brains, whereas math problems typically require a long series of steps to solve, so maybe the way AIs process data is not as suitable for such tasks, "in the same way that humans generally can't do huge calculations in their head," Hendrycks says. However, AI's poor performance on math "is still a niche topic: There hasn't been much traction on the problem," he adds.

  • DARPA SubT Finals: Meet the Teams
    by Evan Ackerman on 21 September 2021 at 12:52

    This is it! This week, we're at the DARPA Subterranean Challenge Finals in Louisville, Ky., where more than two dozen Systems Track and Virtual Track teams will compete for millions of dollars in prize money and the right to say "we won a DARPA challenge," which is of course priceless. We've been following SubT for years, from Tunnel Circuit to Urban Circuit to Cave (non-)Circuit. For a recent recap, have a look at this post-cave pre-final article that includes an interview with SubT Program Manager Tim Chung, but if you don't have time for that, the TLDR is that this week we're looking at both a Virtual Track as well as a Systems Track with physical robots on a real course. The Systems Track teams spent Monday checking in at the Louisville Mega Cavern competition site, and we asked each team to tell us about how they've been preparing, what they think will be most challenging, and what makes them unique.

Team CERBERUS
Country: USA, Switzerland, United Kingdom, Norway
Members: University of Nevada, Reno; ETH Zurich, Switzerland; University of California, Berkeley; Sierra Nevada Corporation; Flyability, Switzerland; Oxford Robotics Institute, United Kingdom; Norwegian University of Science and Technology (NTNU), Norway
Robots: TBA
Follow: Team Website, @CerberusSubt

Q&A: Team Lead Kostas Alexis

How have you been preparing for the SubT Final?
First of all, this year's preparation was strongly influenced by COVID-19, as our team spans multiple countries, namely the US, Switzerland, Norway, and the UK. Despite the challenges, we leveled up both our weekly shake-out events and ran a 2-month team-wide integration and testing activity in Switzerland during July and August with multiple tests in diverse underground settings, including multiple mines. Note that we bring a brand-new set of 4 ANYmal C robots and a new generation of collision-tolerant flying robots, so during this period we further built new hardware.

What do you think the biggest challenge of the SubT Final will be?
We are excited to see how the vastly large spaces available in the Mega Cavern can be combined with the very narrow cross-sections and vertical structures that DARPA promises. We think that terrain with steep slopes and other obstacles, complex 3D geometries, as well as the dynamic obstacles will be the core challenges.

What is one way in which your team is unique, and why will that be an advantage during the competition?
Our team coined early on the idea of a legged and flying robot combination. We have remained focused on this core vision of ours and also bring fully own-developed hardware for both legged and flying systems. This is both our advantage and, in a way, our limitation, as we spend a lot of time in its development. We are fully excited about the potential we see developing and we are optimistic that this will be demonstrated in the Final Event!

Team Coordinated Robotics
Country: USA
Members: California State University Channel Islands; Oke Onwuka; Sequoia Middle School
Robots: TBA

Q&A: Team Lead Kevin Knoedler

How have you been preparing for the SubT Final?
Coordinated Robotics has been preparing for the SubT Final with lots of testing on our team of robots. We have been running them inside, outside, day, night, and in all of the circumstances that we can come up with. In Kentucky we have been busy updating all of the robots to the same standard and repairing bits of shipping damage before the SubT Final.
What do you think the biggest challenge of the SubT Final will be?
The biggest challenge for us will be pulling all of the robots together to work as a team and making sure that everything is communicating together. We did not have lab access until late July, and so we had robots at individuals' homes but were generally only testing one robot at a time.

What is one way in which your team is unique, and why will that be an advantage during the competition?
Coordinated Robotics is unique in a couple of different ways. We are one of only two unfunded teams, so we take a lower-budget approach to solving lots of the issues, and that helps us to have some creative solutions. We are also unique in that we will be bringing a lot of robots (23), so that problems with individual robots can be tolerated as the team of robots continues to search.

Team CoSTAR
Country: USA, South Korea, Sweden
Members: Jet Propulsion Laboratory; California Institute of Technology; Massachusetts Institute of Technology; KAIST, South Korea; Lulea University of Technology, Sweden
Robots: TBA
Follow: Team Website

Q&A: Caltech Team Lead Joel Burdick

How have you been preparing for the SubT Final?
Since May, the team has made 4 trips to a limestone cave near Lexington, Kentucky (and they just finished a week-long "game" there yesterday). Since February, parts or all of the team have been testing 2-3 days a week in a section of the abandoned subway system in downtown Los Angeles.

What do you think the biggest challenge of the SubT Final will be?
That will be a tough one to answer in advance. The expected CoSTAR-specific challenges are of course the complexity of the test site that DARPA has prepared, fatigue of the team, and the usual last-minute hardware failures: we had to have an entire new set of batteries for all of our communication nodes FedExed to us yesterday. More generally, we expect the other teams to be well prepared. Speaking only for myself, I think there will be 4-5 teams that could easily win this competition.

What is one way in which your team is unique, and why will that be an advantage during the competition?
Previously, our team was unique with our Boston Dynamics legged mobility. We've heard that other teams may be using Spot quadrupeds as well, so that may no longer be a uniqueness. We shall see! More importantly, we believe our team is unique in the breadth of the participants (university team members from the U.S., Europe, and Asia). Kind of like the old British empire: the sun never sets on the geographic expanse of Team CoSTAR.

Team CSIRO Data61
Country: Australia, USA
Members: Commonwealth Scientific and Industrial Research Organisation, Australia; Emesent, Australia; Georgia Institute of Technology
Robots: TBA
Follow: Team Website, Twitter

Q&A: SubT Principal Investigator Navinda Kottege

How have you been preparing for the SubT Final?
Test, test, test. We've been testing as often as we can, simulating the competition conditions as best we can. We're very fortunate to have an extensive site here at our CSIRO lab in Brisbane that has enabled us to construct quite varied tests for our full fleet of robots. We have also done a number of offsite tests as well. After going through the initial phases, we have converged on a good combination of platforms for our fleet. Our workhorse platform from the Tunnel Circuit has been the BIA5 ATR tracked robot.
We have recently added Boston Dynamics Spot quadrupeds to our fleet, and we are quite happy with their performance and the level of integration with our perception and navigation stack. We also have custom-designed Subterra Navi drones from Emesent. Our fleet consists of two of each of these three platform types. We have also designed and built a new 'Smart node' for communication with the Rajant nodes. These are dropped from the tracked robots and automatically deploy after a delay by extending out ground plates and antennae. As described above, we have been doing extensive integration testing with the full system to shake out bugs and make improvements.

What do you think the biggest challenge of the SubT Final will be?
The biggest challenge is the unknown. It is always a learning process to discover how the robots respond to new classes of obstacle; responding to this on the fly in a new environment is extremely challenging. Given the format of two preliminary runs and one prize run, there is little to no margin for error compared to previous circuit events, where there were multiple runs that contributed to the final score. Any significant damage to robots during the preliminary runs would be difficult to recover from to perform in the final run.

What is one way in which your team is unique, and why will that be an advantage during the competition?
Our fleet uses a common sensing, mapping, and navigation system across all robots, built around our Wildcat SLAM technology. This is what enables coordination between robots and provides the accuracy required to locate detected objects. This has allowed us to easily integrate different robot platforms into our fleet. We believe this 'homogenous sensing on heterogenous platforms' paradigm gives us a unique advantage in reducing the overall complexity of the development effort for the fleet and also allowing us to scale our fleet as needed. Having excellent partners in Emesent and Georgia Tech and having their full commitment and support is also a strong advantage for us.

Team CTU-CRAS-NORLAB
Country: Czech Republic, Canada
Members: Czech Technical University, Czech Republic; Université Laval, Canada
Robots: TBA
Follow: Team Website, Twitter

Q&A: Team Lead Tomas Svoboda

How have you been preparing for the SubT Final?
We spent most of the time preparing new platforms, as we made a significant technology update. We tested the locomotion and autonomy of the new platforms in Bull Rock Cave, one of the largest caves in Czechia. We also deployed the robots in an old underground fortress to examine the system in an urban-like underground environment. The very last weeks were, however, dedicated to integration tests and system tuning.

What do you think the biggest challenge of the SubT Final will be?
Hard to say, but regarding the expected environment, the vertical shafts might be the most challenging, since they are not easy to access to test and tune the system experimentally. They would also add challenges to communication.

What is one way in which your team is unique, and why will that be an advantage during the competition?
Not sure about the other teams, but we plan to deploy all kinds of ground vehicles: tracked, wheeled, and legged platforms, accompanied by several drones. We hope the diversity of the platform types would be beneficial for adapting to the possible diversity of terrains and underground challenges.
Besides, we also hope the tuned communication will provide access to the robots at a wider range than last time. Optimistically, we might keep all robots connected to the communication infrastructure built during the mission; although the bandwidth is very limited, it should be sufficient for artifact reporting and high-level switching of the robots' goals and autonomous behavior. Team Explorer Country USA Members Carnegie Mellon University Oregon State University Robots TBA Follow Team Website Facebook Q&A: Team Co-Lead Sebastian Scherer How have you been preparing for the SubT Final? Since we expect DARPA to have some surprises on the course for us, we have been practicing in a wide range of different courses around Pittsburgh, including an abandoned hospital complex, a cave, and limestone and coal mines. As the finals approached, we were practicing at these locations nearly daily, with debrief and debugging sessions afterward. This has helped us find the advantages of each of the platforms, ways of controlling them, and the different sensor modalities. What do you think the biggest challenge of the SubT Final will be? For our team, the biggest challenges are steep slopes for the ground robots, thin loose obstacles that can get sucked into the drones' props, and narrow passages. What is one way in which your team is unique, and why will that be an advantage during the competition? We have developed a heterogeneous team for SubT exploration. This gives us an advantage since there is not a single platform that is optimal for all SubT environments. Tunnels are optimal for roving robots, urban environments for walking robots, and caves for flying robots. Our ground robots and drones are custom-designed for navigation in rough terrain and tight spaces. This gives us an advantage since we can get to places not reachable by off-the-shelf platforms. Team MARBLE Country USA Members University of Colorado, Boulder University of Colorado, Denver Scientific Systems Company, Inc. University of California, Santa Cruz Robots TBA Follow Team Twitter Q&A: Project Engineer Gene Rush How have you been preparing for the SubT Final? Our team has worked tirelessly over the past several months as we prepare for the SubT Final. We have invested most of our time and energy in real-world field deployments, which help us in two major ways. First, it allows us to repeatedly test the performance of our full autonomy stack, and second, it provides us the opportunity to emphasize Pit Crew and Human Supervisor training. Our PI, Sean Humbert, has always said "practice, practice, practice." In the month leading up to the event, we stayed true to this advice by holding 10 deployments across a variety of environments, including parking garages, campus buildings at the University of Colorado Boulder, and the Edgar Experimental Mine. What do you think the biggest challenge of the SubT Final will be? I expect the most difficult challenge will be centered around autonomous high-level decision making. Of course, mobility challenges, including treacherous terrain, stairs, and drop-offs, will certainly test the physical capabilities of our mobile robots. However, the scale of the environment is so great, and time so limited, that rapidly identifying the areas that likely have human survivors is vitally important and a very difficult open challenge. I expect most teams, ours included, will utilize the intuition of the Human Supervisor to make these decisions. 
What is one way in which your team is unique, and why will that be an advantage during the competition? Our team has pushed on advancing hands-off autonomy, so our robotic fleet can operate independently in the worst-case scenario: a communication-denied environment. A lack of wireless communication is common in subterranean search and rescue missions, and therefore we expect DARPA will be stressing this part of the challenge in the SubT Final. Our autonomy solution is designed in such a way that it can operate autonomously both with and without communication back to the Human Supervisor. When we are in communication with our robotic teammates, the Human Supervisor has the ability to provide several high-level commands to assist the robots in making better decisions. Team Robotika Country Czech Republic, USA, Switzerland Members Robotika International, Czech Republic and United States Robotika.cz, Czech Republic Czech University of Life Sciences, Czech Republic Centre for Field Robotics, Czech Republic Cogito Team, Switzerland Robots Two wheeled robots Follow Team Website Twitter Q&A: Team Lead Martin Dlouhy How have you been preparing for the SubT Final? Our team participates in both the Systems and Virtual tracks. We were using the virtual environment to develop and test our ideas and techniques, and once they were sufficiently validated in the virtual world, we would transfer these results to the Systems track as well. Then, to validate this transfer, we visited a few underground spaces (mostly caves) with our physical robots to see how they perform in the real world. What do you think the biggest challenge of the SubT Final will be? Besides the usual challenges inherent to underground spaces (mud, moisture, fog, condensation), we also noticed the unusual configuration of the starting point, which is a sharp downhill slope. Our solution is designed to be careful about going down slopes that are too steep, so our concern is that, as things stand, the robots may hesitate to even get started. We are making some adjustments in the remaining time to account for this. Also, unlike the environment in all the previous rounds, the Mega Cavern features some really large open spaces. Our solution is designed to expect detection of obstacles somewhere in the vicinity of the robot at any given point, so the concern is that a large open space may confuse its navigational system. We are looking into handling such a situation better as well. What is one way in which your team is unique, and why will that be an advantage during the competition? It appears that we are unique in bringing only two robots into the Finals. We brought more into the earlier rounds to test different platforms and ultimately picked the two we are fielding this time as best suited for the expected environment. A potential benefit for us is that supervising only two robots could be easier and perhaps more efficient than managing larger numbers.

  • Solar and Battery Companies Rattle Utility Powerhouses
    by Michael Dumiak on 20. September 2021. at 15:15

All eyes these days may be on Elon Musk's space venture—which has just put people in orbit—but here on Earth you can now get your monthly electric bill courtesy of a different Musk enterprise. Tesla and its partner Octopus Energy Germany recently rolled out retail utility services in two large German states. It's being marketed as the "Tesla Energy Plan," and is available to any individual household in this region of 24 million people that has a solar panel system, a grid connection—and a Tesla Powerwall, the Palo Alto firm's gigafactory-made 13.5 kWh battery wall unit. The German initiative comes on the heels of a similar rollout through Octopus Energy last November in the United Kingdom. It's too soon to say if these are the nascent strands of a "giant distributed utility," an expression Musk has long talked up, the meaning of which is not yet clear. Analysts and power insiders sketch scenes including interconnected local renewable grids that draw on short-duration battery storage (including the small batteries in electric vehicles in the garage, models of which Tesla just happens to make) combined with multi-day storage for power generated by wind and solar. For bigger national grids it gets more complicated. Even so, Tesla also now has gear on the market that institutional battery storage developers can use to run load-balancing trade operations: the consumer won't see those, but it's part of ongoing changes as renewables become more important in the power game. Being able to get a Tesla-backed power bill in the mailbox, though—that's grabbing attention. And more broadly speaking, the notion of what is and isn't a utility is in flux. "Over the last five to 10 years we have seen an uptick in new entrants providing retail energy services," says Albert Cheung, head of global analysis at BloombergNEF. "It is now quite common to see these types of companies gain significant market share without necessarily owning any of their own generation or network assets at all." A decade ago it became possible to get your electricity in the UK from a department store chain (though with the actual power supplied first by a Scottish utility and—as of 2018—arranged and managed by Octopus Energy). As Tesla and other makers of home energy storage systems ramp up production for modular large-scale lithium-ion batteries that can be stacked together in industrial storage facilities, new wrinkles are coming to the grid. "There are simply going to be more and different business models out there," Cheung says. "There is going to be value in distributed energy resources at the customer's home, whether that is a battery, an electric vehicle charger, a heat pump, or other forms of flexible load, and managing these in a way that provides value to the grid will create revenue opportunities." Tesla Gigafactory site taking shape in Grünheide, Germany, in June 2021; it is due to open in late 2021 or early 2022. Tesla, the battery maker, with its giant new production plant nearing completion near Berlin, may be in position to supply a variety of venues with its wall-sized and cargo-container-sized units: As it does so, its controversial bet in first backing and then absorbing panel producer SolarCity may start to look a little different. Harmony Energy seems pretty pleased. The UK-based energy developer's just broken ground on a new four-acre battery storage site outside London, its third such site. 
Its second just came online with 68 MWh storage capacity and a 34 MW peak, with the site comprising 28 Tesla Megapack batteries. Harmony expects to be at over a gigawatt of live, operating output in the next three to four years. The Harmony enterprise works with the UK national grid, however—that's different from Octopus's German and UK retail initiatives. Both Harmony and Octopus depend on trading and energy network management software platforms, and Tesla works with both. But while Octopus has its own in-house management platform—Kraken—Harmony engages Tesla's Autobidder. Peter Kavanagh, Harmony's CEO, says his firm pays Tesla to operate Autobidder on its behalf—Tesla is fully licensed to trade in the UK and is an approved utility there. The batteries get charged when power is cheap; when there's low wind and no sun, energy prices may start to spike, and the batteries can discharge the power back into the grid, balancing the constant change of supply and demand, and trading on the difference to make a business. A load-balancing trading operation is not quite the same as mainlining renewables to light a house. On any national grid, once the energy is in there, it's hard to trace the generating source—some of it will come from fossil fuels. But industrial-scale energy storage is crucial to any renewable operation: the wind dies down, the sun doesn't always shine. "Whether it's batteries or some other energy storage technology, it is key to hitting net zero carbon emissions," Kavanagh says. "Without it, you are not going to get there." Battery research and development is burgeoning far beyond Tesla, and the difficult hunt is on to move past lithium ion. And it's not just startups and young firms in the mix: Established utility giants—the Pacific Gas & Electrics of the world, able to generate as well as retail power—are also adding battery storage, and at scale. In Germany, the large industrial utility RWE started its own battery unit and is now operating small energy storage sites in Germany and in Arizona. Newer entrants, potential energy powerhouses, are on the rise in Italy, Spain and Denmark. The Tesla Energy plan does have German attention though, of media and energy companies alike. It's also of note that Tesla is behind the very large battery at Australia's Hornsdale Power Reserve. One German pundit imagined Octopus's Kraken management platform as a "monstrous octopus with millions of tentacles," linking a myriad of in-house electric storage units to form a huge virtual power plant. That would be something to reckon with.
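    The charge-cheap, discharge-dear logic behind a trading platform like Autobidder can be illustrated with a toy example. The sketch below is a minimal, hypothetical illustration of battery arbitrage against a day of hourly prices; the prices, thresholds, and greedy dispatch rule are invented for illustration and are not Tesla's, Octopus's, or Harmony's actual parameters (only the 68 MWh / 34 MW figures echo the site described above).

```python
# Toy battery-arbitrage sketch: charge when power is cheap, discharge when
# prices spike, and book the difference. All numbers are illustrative only.

def arbitrage(prices_eur_per_mwh, capacity_mwh=68.0, power_mw=34.0,
              buy_below=40.0, sell_above=90.0):
    """Greedy hourly dispatch for a single battery (hypothetical thresholds)."""
    stored = 0.0
    profit = 0.0
    for price in prices_eur_per_mwh:            # one price per hour
        if price <= buy_below and stored < capacity_mwh:
            energy = min(power_mw, capacity_mwh - stored)   # 1-hour step
            stored += energy
            profit -= energy * price
        elif price >= sell_above and stored > 0.0:
            energy = min(power_mw, stored)
            stored -= energy
            profit += energy * price
    return profit, stored

if __name__ == "__main__":
    day = [35, 30, 28, 32, 45, 60, 95, 110, 80, 50, 38, 33,
           31, 36, 55, 85, 120, 130, 90, 70, 55, 42, 37, 34]
    profit, left_over = arbitrage(day)
    print(f"profit: EUR {profit:,.0f}; energy still stored: {left_over} MWh")
```

    A real platform would of course optimize against price forecasts, grid-services contracts, and battery degradation rather than fixed thresholds, but the revenue mechanism is the same: buy low, store, sell high.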

  • Help Build the Future of Assistive Technology
    by California State University, Northridge on 20. September 2021. at 12:09

    This article is sponsored by California State University, Northridge (CSUN). Your smartphone is getting smarter. Your car is driving itself. And your watch tells you when to breathe. That, as strange as it might sound, is the world we live in. Just look around you. Almost every day, there's a better or more convenient version of the latest gadget, device, or software. And that's only on the commercial end. The medical and rehabilitative tech is equally impressive — and arguably far more important. Because for those with disabilities, assistive technologies mean more than convenience. They mean freedom. So, what is an assistive technology (AT), and who designs it? The term might be new to you, but you're undoubtedly aware of many: hearing aids, prosthetics, speech-recognition software (Hey, Siri), even the touch screen you use each day on your cell phone. They're all assistive technologies. AT, in its most basic form, is anything that helps a person achieve enhanced performance, improved function, or accelerated access to information. A car lets you travel faster than walking; a computer lets you process data at an inhuman speed; and a search engine lets you easily find information. CSUN Master of Science in Assistive Technology Engineering The fully online M.S. in Assistive Technology Engineering program can be completed in less than two years and allows you to collaborate with other engineers and AT professionals. GRE is not required and financial aid is available. Request more information about the program here. That's the concept – in a simplified form, of course. The applications, however, are vast and still expanding. In addition to mechanical products and devices, the field is deeply involved in artificial intelligence, machine learning, and neuroscience. Brain machine interfaces, for instance, allow users to control prosthetics with thought alone; and in some emergency rooms, self-service kiosks can take your blood pressure, pulse and weight, all without any human intervention. These technologies, and others like them, will only grow more prevalent with time – as will the need for engineers to design them. Those interested in the field typically enter biomedical engineering programs. These programs, although robust in design, focus often on hardware, teaching students how to apply engineering principles to medicine and health care. What many lack, however, is a focus on the user. But that's changing. Some newer programs, many of them certificates, employ a more user-centric model. One recent example is the Master of Science in Assistive Technology Engineering at California State University, Northridge (CSUN). The degree, designed in collaboration with industry professionals, is a hybrid of sorts, focusing as much on user needs as on the development of new technologies. CSUN, it should be noted, is no newcomer to the field. For more than three decades, the university has hosted the world's largest assistive technology conference. To give you an idea, this year's attendees included Google, Microsoft, Hulu, Amazon, and the Central Intelligence Agency. The university is also home to a sister degree, the Master of Science in Assistive Technology and Human Services, which prepares graduates to assist and train AT users. As you can imagine, companies are aggressively recruiting engineers with this cross-functional knowledge. Good UX design is universally desired, as it's needed for both optimal function and, often, ADA compliance. 
The field has implications in war as well – both during and after. Coming as no surprise, the military is investing heavily in AT hardware and research. Why? On the most basic level, the military is interested in rehabilitating combat veterans. Assistive technologies, such as prosthetic limbs, enable those wounded in combat to pursue satisfying lives in the civilian world. Beyond that, assistive technology is a core part of the military's long-term strategic plan. Wearable electronics, such as VR headsets and night-vision goggles, fit within the military's expanding technological horizon, as do heads-up displays, exoskeletons, and drone technologies. The Future of Assistive Technology So, what does the future have in store for AT? We'll likely see more and better commercial technologies designed for entertainment. Think artificial realities with interactive elements in the real world (a whale floating by your actual window, not a simulated one). Kevin Kelly of Wired Magazine refers to this layered reality as the "Mirrorworld." And according to him, it's going to spark the next tech platform. Imagine Facebook in the Matrix... Or, come to think of it, don't. An increasing number of mobile apps, such as those able to detect Parkinson's disease, will also hit the market. As will new biomedical hardware, like brain and visual implants. Fortunately, commercial innovations often drive medical ones as well. And as we see an uptick in entertainment, we'll see an equal surge in medicine, with new technologies – things we haven't even considered yet – empowering those in need. Help build the future of assistive technology! Visit CSUN's Master of Science in Assistive Technology Engineering site to learn more about the program or request more information here.

  • Video Friday: Preparing for the SubT Final
    by Evan Ackerman on 17. September 2021. at 15:23

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!): DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA WeRobot 2021 – September 23-25, 2021 – [Online Event] IROS 2021 – September 27-October 1, 2021 – [Online Event] Robo Boston – October 1-2, 2021 – Boston, MA, USA WearRAcon Europe 2021 – October 5-7, 2021 – [Online Event] ROSCon 2021 – October 20-21, 2021 – [Online Event] Silicon Valley Robot Block Party – October 23, 2021 – Oakland, CA, USA Let us know if you have suggestions for next week, and enjoy today's videos. Team Explorer, the SubT Challenge entry from CMU and Oregon State University, is in the last stage of preparation for the competition this month inside the Mega Caverns cave complex in Louisville, Kentucky. [ Explorer ] Team CERBERUS is looking good for the SubT Final next week, too. Autonomous subterranean exploration with the ANYmal C Robot inside the Hagerbach underground mine [ ARL ] I'm still as skeptical as I ever was about a big and almost certainly expensive two-armed robot that can do whatever you can program it to do (have fun with that) and seems to rely on an app store for functionality. [ Unlimited Robotics ] Project Mineral is using breakthroughs in artificial intelligence, sensors, and robotics to find ways to grow more food, more sustainably. [ Mineral ] Not having a torso or anything presumably makes this easier. Next up, Digit limbo! [ Hybrid Robotics ] Paric completed layout of a 500-unit apartment complex utilizing the Dusty FieldPrinter solution. Autonomous layout on the plywood deck saved weeks' worth of schedule, allowing the panelized walls to be placed sooner. [ Dusty Robotics ] Spot performs inspection in the Kidd Creek Mine, enabling operators to keep their distance from hazards. [ Boston Dynamics ] Digit's engineered to be a multipurpose machine. Meaning, it needs to be able to perform a collection of tasks in practically any environment. We do this by first ensuring the robot's physically capable. Then we help the robot perceive its surroundings, understand its surroundings, then reason a best course of action to navigate its environment and accomplish its task. This is where software comes into play. This is early AI in action. [ Agility Robotics ] This work proposes a compact robotic limb, AugLimb, that can augment our body functions and support daily activities. The proposed device can be mounted on the user's upper arm and transform into a compact state without obstructing the wearer. [ AugLimb ] Ahold Delhaize and AIRLab need the help of academics who have knowledge of human-robot interactions, mobility, manipulation, programming, and sensors to accelerate the introduction of robotics in retail. In the AIRLab Stacking challenge, teams will work on algorithms that focus on smart retail applications, for example, automated product stacking. [ PAL Robotics ] Leica, not at all well known for making robots, is getting into the robotic reality capture business with a payload for Spot and a new drone. Introducing BLK2FLY: Autonomous Flying Laser Scanner [ Leica BLK ] As much as I like Soft Robotics, I'm maybe not quite as optimistic as they are about the potential for robots to take over quite this much from humans in the near term. [ Soft Robotics ] Over the course of this video, the robot gets longer and longer and longer. 
[ Transcend Robotics ] This is a good challenge: attach a spool of electrical tape to your drone, which can unpredictably unspool itself and make sure it doesn't totally screw you up. [ UZH ] Two interesting short seminars from NCCR Robotics, including one on autonomous racing drones and "neophobic" mobile robots. Dario Mantegazza: Neophobic Mobile Robots Avoid Potential Hazards [ NCCR ] This panel on Synergies between Automation and Robotics comes from ICRA 2021, and once you see the participant list, I bet you'll agree that it's worth a watch. [ ICRA 2021 ] CMU RI Seminars are back! This week we hear from Andrew E. Johnson, a Principal Robotics Systems Engineer in the Guidance and Control Section of the NASA Jet Propulsion Laboratory, on "The Search for Ancient Life on Mars Began with a Safe Landing." Prior mars rover missions have all landed in flat and smooth regions, but for the Mars 2020 mission, which is seeking signs of ancient life, this was no longer acceptable. Terrain relief that is ideal for the science obviously poses significant risks for landing, so a new landing capability called Terrain Relative Navigation (TRN) was added to the mission. This talk will describe the scientific goals of the mission, the Terrain Relative Navigation system design and the successful results from landing on February 18th, 2021. [ CMU RI Seminar ]

  • China’s Mars Helicopter to Support Future Rover Exploration
    by Andrew Jones on 17. September 2021. at 15:17

The first-ever powered flight by an aircraft on another planet took place in April, when NASA's Ingenuity helicopter, delivered to the Red Planet along with the Perseverance rover, lifted off—but the idea has already taken off elsewhere. Earlier this month a prototype "Mars surface cruise drone system" developed by a team led by Bian Chunjiang at China's National Space Science Center (NSSC) in Beijing gained approval for further development. Like Ingenuity, which was intended purely as a technology demonstration, it uses two sets of blades on a single rotor mast to provide lift for vertical take-offs and landings in the very thin Martian atmosphere, which is around 1% the density of Earth's. The team did consider a fixed-wing approach, which other space-related research institutes in China have been developing, but found the constraints related to size, mass, power, and lift best met by the single-rotor-mast approach. Solar panels charge Ingenuity's batteries enough to allow one 90-second flight per Martian day. The NSSC team, however, is considering adopting wireless charging through the rover, or a combination of both power systems. The total mass is 2.1 kilograms, slightly heavier than the 1.8-kg Ingenuity. It would fly at an altitude of 5-10 meters, reaching speeds of around 300 meters per minute, with a possible duration of 3 minutes per flight. Limitations include energy consumption and temperature control. According to an article published by China Science Daily, Bian proposed development of a helicopter to help guide a rover in March 2019, and the idea was accepted in June that year. The idea is that, by imaging the areas ahead, the rover could better select routes that avoid the otherwise unseen areas that restrict or pose challenges to driving. The small craft's miniature multispectral imaging system may also detect scientifically valuable targets, such as evidence of notable compounds, that would otherwise be missed, deliver preliminary data, and direct the rover for more detailed observations. The next steps, Bian said, will be developing the craft to operate in the very low atmospheric pressure and frigid temperatures of Mars, as well as in the dust environment and other complex environmental conditions. Bian also notes that, to properly support science and exploration goals, the helicopter's design life must be at least a few months, or even beyond a year, on Mars. To properly test the vehicle, these conditions will have to be simulated here on Earth. Bian says China does not currently have facilities that can meet all of the parameters. Faced with similar challenges for Ingenuity, Caltech graduate students built a custom wind tunnel for testing, and the NSSC team may likewise need to take a bespoke approach. "The next 5 to 6 years are a window for research," Bian said. "We hope to overcome these technical problems and allow the next Mars exploration mission to carry a drone on Mars." When the aircraft could be deployed on Mars is unknown. China's first Mars rover landed in May, but there is no backup vehicle, unlike its predecessor lunar rover missions. The country's next interplanetary mission is expected to be a complex and unprecedented Mars sample-return mission launching around 2028-2030. Ingenuity's first flight was declared by NASA to be a "Wright Brothers moment." Six years after the 1903 Wright Flyer, Chinese-born Feng Ru successfully flew his own biplane. Likewise, in the coming years, China will be looking to carry out its own powered flight on another planet.
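    For a rough sense of the reported flight envelope, the back-of-the-envelope arithmetic below works out what 300 meters per minute over a 3-minute flight implies for per-flight range and compares the stated masses; it is derived only from the figures quoted above, not from any published performance analysis.

```python
# Back-of-the-envelope numbers taken from the figures quoted in the article.
speed_m_per_min = 300        # reported cruise speed of the NSSC prototype
flight_minutes = 3           # reported maximum duration per flight
mass_nssc_kg = 2.1
mass_ingenuity_kg = 1.8

range_per_flight_m = speed_m_per_min * flight_minutes
print(f"Range per flight: ~{range_per_flight_m} m at 5-10 m altitude")
print(f"Mass overhead vs. Ingenuity: {mass_nssc_kg - mass_ingenuity_kg:.1f} kg "
      f"({(mass_nssc_kg / mass_ingenuity_kg - 1) * 100:.0f}% heavier)")
```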

  • New Fuel Cell Tech Points Toward Zero-Emission Trains
    by Michelle Hampson on 17. September 2021. at 13:00

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore. Diesel and steam-powered trains have been transporting passengers and cargo around the world for more than 200 years—all the while releasing greenhouse gas emissions into the atmosphere. In the hopes of a greener future, many countries and companies are eyeing more renewable sources of locomotion. The Pittsburgh-based company Wabtec recently unveiled a battery-electric hybrid train that it says can reduce emissions "by double digits per train." More ambitiously, some are developing hydrogen-powered trains, which, rather than emitting greenhouse gases, produce only water vapor and droplets. The technology has the potential to help countries meet greenhouse gas reduction targets and slow the progression of climate change. But producing electricity from hydrogen comes with its own challenges. For example, the fuel cells require additional heavy converters to manage their wide voltage range. The weight of these bulky converters ultimately reduces the range of the train. In a recent advancement, researchers in the UK have designed a new converter that is substantially lighter and more compact than state-of-the-art hydrogen fuel cell converters. They describe the new design in a study published August 25 in IEEE Transactions on Industrial Electronics. Pietro Tricoli, a professor at the University of Birmingham, was involved in the study. He notes that lighter converters are needed to help maximize the range that hydrogen-powered trains can travel. Therefore, his team developed the newer, lighter converter, which they describe in their paper as "ground-breaking." It uses semiconductor devices to draw energy in a controlled way from the fuel cells and deliver it to the train's motors. "Our converter directly manages any voltage variations in the fuel cells, without affecting the motor currents. A conventional system would require two separate converters to achieve this," explains Tricoli. With the power converted to AC, the motors of a train can benefit from regenerative braking, whereby energy is harvested and recycled when the train is decelerating. The researchers first tested their design through simulations, and then validated it with a small-scale laboratory prototype representing the traction system of a train. The results confirm that the new converter can facilitate desirable speeds and accelerations, as well as achieve regenerative braking. Left: A prototype of the new hydrogen fuel cell converter. Right: A module used at the heart of the converter. "The main strength of the converter is the reduction of volume and weight compared to the state of the art [converters for hydrogen fuel cells]," explains Tricoli. The main drawback, he says, is that the new converter design requires more semiconductor devices, as well as more complex circuitry and monitoring systems. Tricoli says there's still plenty of work ahead to optimize the system, ultimately working toward a full-scale prototype. "The current plan is to engage with train manufacturers and manufacturers of traction equipment to build a second [prototype] for a hydrogen train," he says. This past spring marked an exciting milestone when, upon the completion of a 538-day trial period, two hydrogen-powered trains successfully transported passengers across 180,000 kilometers in Germany—while emitting zero vehicle emissions. 
As more advancements like this are made in hydrogen technology, increasingly efficient hydrogen-powered trains become possible. All aboard!
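    As a rough illustration of why regenerative braking matters for range, the sketch below estimates how much kinetic energy a decelerating train could in principle return through the traction converter. The train mass, speed, and round-trip efficiency are assumed values chosen for illustration; they are not figures from the Birmingham study.

```python
# Illustrative estimate of energy recoverable by regenerative braking.
# All inputs are assumptions for the sake of the example, not study data.

def recoverable_energy_kwh(mass_tonnes, speed_kmh, round_trip_efficiency=0.6):
    """Kinetic energy 0.5*m*v^2, scaled by an assumed recovery efficiency."""
    mass_kg = mass_tonnes * 1_000
    speed_ms = speed_kmh / 3.6
    energy_j = 0.5 * mass_kg * speed_ms ** 2 * round_trip_efficiency
    return energy_j / 3.6e6          # joules -> kilowatt-hours

if __name__ == "__main__":
    # e.g. a ~100-tonne regional trainset braking to a stop from 80 km/h
    print(f"~{recoverable_energy_kwh(100, 80):.1f} kWh per stop (assumed values)")
```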

  • Q&A With Co-Creator of the 6502 Processor
    by Stephen Cass on 16. September 2021. at 18:00

    Few people have seen their handiwork influence the world more than Bill Mensch. He helped create the legendary 8-bit 6502 microprocessor, launched in 1975, which was the heart of groundbreaking systems including the Atari 2600, Apple II, and Commodore 64. Mensch also created the VIA 65C22 input/output chip—noted for its rich features and which was crucial to the 6502's overall popularity—and the second-generation 65C816, a 16-bit processor that powered machines such as the Apple IIGS, and the Super Nintendo console. Many of the 65x series of chips are still in production. The processors and their variants are used as microcontrollers in commercial products, and they remain popular among hobbyists who build home-brewed computers. The surge of interest in retrocomputing has led to folks once again swapping tips on how to write polished games using the 6502 assembly code, with new titles being released for the Atari, BBC Micro, and other machines. Mensch, an IEEE senior life member, splits his time between Arizona and Colorado, but folks in the Northeast of the United States will have the opportunity to see him as a keynote speaker at the Vintage Computer Festival in Wall, N.J., on the weekend of 8 October. In advance of Mensch's appearance, The Institute caught up with him via Zoom to talk about his career. This interview had been condensed and edited for clarity. The Institute: What drew you into engineering? Bill Mensch: I went to Temple University [in Philadelphia] on the recommendation of a guidance counselor. When I got there I found they only had an associate degree in engineering technology. But I didn't know what I was doing, so I thought: Let's finish up that associate degree. Then I got a job [in 1967] as a technician at [Pennsylvania TV maker] Philco-Ford and noticed that the engineers were making about twice as much money. I also noticed I was helping the engineers figure out what Motorola was doing in high-voltage circuits—which meant that Motorola was the leader and Philco was the follower. So I went to the University of Arizona, close to where Motorola was, got my engineering degree [in 1971] and went to work for Motorola. TI: How did you end up developing the 6502? BM: Chuck Peddle approached me. He arrived at Motorola two years after I started. Now, this has not been written up anywhere that I'm aware of, but I think his intention was to raid Motorola for engineers. He worked with me on the peripheral interface chip (PIA) and got to see me in action. He decided I was a young, egotistical engineer who was just the right kind to go with his ego. So Chuck and I formed a partnership of sorts. He was the system engineer, and I was the semiconductor engineer. We tried to start our own company [with some other Motorola engineers] and when that didn't happen, we joined an existing [semiconductor design] company, called MOS Technology, in Pennsylvania in 1974. That's where we created the 6501 and 6502 [in 1975], and I designed the input/output chips that went with it. The intention was to [develop a US $20 microprocessor to] compete with the Intel 4040 microcontroller chipset, which sold for about $29 at the time. We weren't trying to compete with the 6800 or the 8080 [chips designed for more complex microcomputer systems]. TI: The 6502 did become the basis of a lot of microcomputer systems, and if you look at contemporary programmer books, they often talk about the quirks of the 6502's architecture and instruction set compared with other processors. 
What drove those design decisions? BM: Rod Orgill and I had completed the designs of a few microprocessors before the 6501/6502. In other words, Rod and I already knew what was successful in an instruction set. And lower cost was key. So we looked at what instructions we really needed. And we figured out how to have addressable registers by using zero page [the first 256 bytes in RAM]. So you can have one byte for the op code and one byte for the address, and [the code is compact and fast]. There are limitations, but compared to other processors, zero page was a big deal. There is a love for this little processor that's undeniable. TI: A lot of pages in those programming books are devoted to explaining how to use the versatile interface adapter (VIA) chip and its two I/O ports, on-board timers, a serial shift register, and so on. Why so many features? BM: I had worked on the earlier PIA chip at Motorola. That meant I understood the needs of real systems in real-world implementations. [While working at MOS] Chuck, Wil Mathis, our applications guy, and I were eating at an Arby's one day, and we talked about doing something beyond the PIA. And they were saying, "We'd like to put a couple of timers on it. We'd like a serial port," and I said, "Okay, we're going to need more register select lines." And our notes are on an Arby's napkin. And I went off and designed it. Then I had to redesign it to make it more compatible with the PIA. I also made a few changes at Apple's request. What's interesting about the VIA is that it's the most popular chip we sell today. I'm finding out more and more about how it was used in different applications. TI: After MOS Technology, in 1978 you founded The Western Design Center, where you created the 65C816 CPU. The creators of the ARM processor credit a visit to WDC as giving them the confidence to design their own chip. Do you remember that visit? BM: Vividly! Sophie Wilson and Steve Furber visited me and talked to me about developing a 32-bit chip. They wanted to leapfrog what Apple was rumored to be up to. But I was just finishing up the '816, and I didn't want to change horses. So when they [had success with the ARM] I was cheering them on because it wasn't something I wanted to do. But I did leave them with the idea of, "Look, if I can do it here … there are two of you; there's one of me." TI: The 6502 and '816 are often found today in other forms, either as the physical core of a system-on-a-chip, or running on an FPGA. What are some of the latest developments? BM: I'm excited about what's going on right now. It's more exciting than ever. I was just given these flexible 6502s printed with thin films by PragmatIC! Our chips are in IoT devices, and we have new educational boards coming out. TI: Why do you think the original 65x series is still popular, especially among people building their own personal computers? BM: There is a love for this little processor that's undeniable. And the reason is we packed it with love while we were designing it. We knew what we were doing. Rod and I knew from our previous experience with the Olivetti CPU and other chips. And from my work with I/O chips, I knew [how computers were used] in the real world. People want to work with the 65x chips because they are accessible. You can trust the technology.
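    Mensch's point about zero page is essentially about instruction encoding: an address in the first 256 bytes of RAM fits in a single byte, so a load or store needs only two bytes instead of three and one fewer memory fetch. The sketch below is an illustrative byte-and-cycle comparison using the standard 6502 opcodes for LDA; it is a simplification that ignores page-crossing and other timing details, and the "program" is just a hypothetical run of loads.

```python
# Illustrative comparison of 6502 LDA encodings: zero-page vs. absolute.
# Opcode values, sizes, and cycle counts are the standard 6502 figures;
# the routine of 100 loads is hypothetical, for counting purposes only.

ZERO_PAGE = {"opcode": 0xA5, "bytes": 2, "cycles": 3}   # LDA $nn
ABSOLUTE  = {"opcode": 0xAD, "bytes": 3, "cycles": 4}   # LDA $nnnn

def encoded_cost(num_loads, mode):
    """Total bytes and cycles for a straight run of LDA instructions."""
    return num_loads * mode["bytes"], num_loads * mode["cycles"]

if __name__ == "__main__":
    loads = 100
    zp_bytes, zp_cycles = encoded_cost(loads, ZERO_PAGE)
    abs_bytes, abs_cycles = encoded_cost(loads, ABSOLUTE)
    print(f"zero page: {zp_bytes} bytes, {zp_cycles} cycles")
    print(f"absolute:  {abs_bytes} bytes, {abs_cycles} cycles")
```

    On a $20 processor fetching one byte per cycle, that kind of saving in code size and memory traffic is exactly the "lower cost was key" trade-off Mensch describes.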

  • Spot’s 3.0 Update Adds Increased Autonomy, New Door Tricks
    by Evan Ackerman on 15. September 2021. at 22:32

    While Boston Dynamics' Atlas humanoid spends its time learning how to dance and do parkour, the company's Spot quadruped is quietly getting much better at doing useful, valuable tasks in commercial environments. Solving tasks like dynamic path planning and door manipulation in a way that's robust enough that someone can buy your robot and not regret it is, I would argue, just as difficult (if not more difficult) as getting a robot to do a backflip. With a short blog post today, Boston Dynamics is announcing Spot Release 3.0, representing more than a year of software improvements over Release 2.0 that we covered back in May of 2020. The highlights of Release 3.0 include autonomous dynamic replanning, cloud integration, some clever camera tricks, and a new ability to handle push-bar doors, and earlier today, we spoke with Spot Chief Engineer at Boston Dynamics Zachary Jackowski to learn more about what Spot's been up to. Here are some highlights from Spot's Release 3.0 software upgrade today, lifted from this blog post which has the entire list: Mission planning: Save time by selecting which inspection actions you want Spot to perform, and it will take the shortest path to collect your data. Dynamic replanning: Don't miss inspections due to changes on site. Spot will replan around blocked paths to make sure you get the data you need. Repeatable image capture: Capture the same image from the same angle every time with scene-based camera alignment for the Spot CAM+ pan-tilt-zoom (PTZ) camera. Cloud-compatible: Connect Spot to AWS, Azure, IBM Maximo, and other systems with existing or easy-to-build integrations. Manipulation: Remotely operate the Spot Arm with ease through rear Spot CAM integration and split-screen view. Arm improvements also include added functionality for push-bar doors, revamped grasping UX, and updated SDK. Sounds: Keep trained bystanders aware of Spot with configurable warning sounds. The focus here is not just making Spot more autonomous, but making Spot more autonomous in some very specific ways that are targeted towards commercial usefulness. It's tempting to look at this stuff and say that it doesn't represent any massive new capabilities. But remember that Spot is a product, and its job is to make money, which is an enormous challenge for any robot, much less a relatively expensive quadruped. For more details on the new release and a general update about Spot, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics. IEEE Spectrum: So what's new with Spot 3.0, and why is this release important? Zachary Jackowski: We've been focusing heavily on flexible autonomy that really works for our industrial customers. The thing that may not quite come through in the blog post is how iceberg-y making autonomy work on real customer sites is. Our blog post has some bullet points about "dynamic replanning" in maybe 20 words, but in doing that, we actually reengineered almost our entire autonomy system based on the failure modes of what we were seeing on our customer sites. The biggest thing that changed is that previously, our robot mission paradigm was a linear mission where you would take the robot around your site and record a path. Obviously, that was a little bit fragile on complex sites—if you're on a construction site and someone puts a pallet in your path, you can't follow that path anymore. So we ended up engineering our autonomy system to do building scale mapping, which is a big part of why we're calling it Spot 3.0. 
This is state-of-the-art from an academic perspective, except that it's volume shipping in a real product, which to me represents a little bit of our insanity. And one super cool technical nugget in this release is that we have a powerful pan/tilt/zoom camera on the robot that our customers use to take images of gauges and panels. We've added scene-based alignment and also computer vision model-based alignment so that the robot can capture the images from the same perspective, every time, perfectly framed. In pictures of the robot, you can see that there's this crash cage around the camera, but the image alignment stuff actually does inverse kinematics to command the robot's body to shift a little bit if the cage is including anything important in the frame. When Spot is dynamically replanning around obstacles, how much flexibility does it have in where it goes? There are a bunch of tricks to figuring out when to give up on a blocked path, and then it's very simple run of the mill route planning within an existing map. One of the really big design points of our system, which we spent a lot of time talking about during the design phase, is that it turns out in these high value facilities people really value predictability. So it's not desired that the robot starts wandering around trying to find its way somewhere. Do you think that over time, your customers will begin to trust the robot with more autonomy and less predictability? I think so, but there's a lot of trust to be built there. Our customers have to see the robot to do the job well for a significant amount of time, and that will come. Can you talk a bit more about trying to do state-of-the-art work on a robot that's being deployed commercially? I can tell you about how big the gap is. When we talk about features like this, our engineers are like, "oh yeah I could read this paper and pull this algorithm and code something up over a weekend and see it work." It's easy to get a feature to work once, make a really cool GIF, and post it to the engineering group chat room. But if you take a look at what it takes to actually ship a feature at product-level, we're talking person-years to have it reach the level of quality that someone is accustomed to buying an iPhone and just having it work perfectly all the time. You have to write all the code to product standards, implement all your tests, and get everything right there, and then you also have to visit a lot of customers, because the thing that's different about mobile robotics as a product is that it's all about how the system responds to environments that it hasn't seen before. The blog post calls Spot 3.0 "A Sensing Solution for the Real World." What is the real world for Spot at this point, and how will that change going forward? For Spot, 'real world' means power plants, electrical switch yards, chemical plants, breweries, automotive plants, and other living and breathing industrial facilities that have never considered the fact that a robot might one day be walking around in them. It's indoors, it's outdoors, in the dark and in direct sunlight. When you're talking about the geometric aspect of sites, that complexity we're getting pretty comfortable with. I think the frontiers of complexity for us are things like, how do you work in a busy place with lots of untrained humans moving through it—that's an area where we're investing a lot, but it's going to be a big hill to climb and it'll take a little while before we're really comfortable in environments like that. 
Functional safety, certified person detectors, all that good stuff, that's a really juicy unsolved field. Spot can now open push-bar doors, which seems like an easier problem than doors with handles, which Spot learned to open a while ago. Why'd you start with door handles first? Push-bar doors are an easier problem! But being engineers, we did the harder problem first, because we wanted to get it done.
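    Jackowski describes dynamic replanning as "very simple run of the mill route planning within an existing map" once the robot decides a path is blocked. The sketch below is a minimal, generic illustration of that idea on a toy graph map using Dijkstra-style search; it is not Boston Dynamics' code, and the map, waypoint names, and blocking logic are invented for the example.

```python
# Minimal illustration of replanning on a prebuilt graph map: when an edge is
# observed to be blocked, remove it and re-run shortest-path search.
# Not Boston Dynamics' implementation; the map and names are invented.
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over {node: {neighbor: cost}}; returns a node list or None."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, step in graph.get(node, {}).items():
            if nxt not in visited:
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

def replan(graph, start, goal, blocked_edge):
    """Drop a blocked edge (both directions) and search the same map again."""
    a, b = blocked_edge
    graph[a].pop(b, None)
    graph[b].pop(a, None)
    return shortest_path(graph, start, goal)

if __name__ == "__main__":
    site = {  # toy facility map: inspection waypoints and traversal costs
        "dock": {"hall": 1}, "hall": {"dock": 1, "pumpA": 2, "stairs": 4},
        "pumpA": {"hall": 2, "panel": 2}, "stairs": {"hall": 4, "panel": 3},
        "panel": {"pumpA": 2, "stairs": 3},
    }
    print("planned:  ", shortest_path(site, "dock", "panel"))
    print("replanned:", replan(site, "dock", "panel", ("hall", "pumpA")))
```

    The "predictability" point in the interview maps onto this structure: the robot only ever chooses among routes already encoded in the map, rather than wandering off to explore.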

  • Will iPhone 13 Trigger Headaches and Nausea?
    by Tekla S. Perry on 15. September 2021. at 14:25

Tim Cook is "so excited for iPhone 13." I'm not, because yet again, Apple's latest and greatest tech sits behind an OLED display. And OLEDs, for some of us, cause nausea, headaches, or worse. I explain why Apple's OLED displays, which dim by flickering on and off rather than by adjusting voltage, trigger health issues here. The iPhone 13 series, launched Tuesday, has cool video features, like automatically changing focus on the fly. The phones have longer battery lives. They have better processors. But the lineup doesn't come with an LCD option; it's the second generation that's OLED only. Watching the livestream of the iPhone 13 intro event this week, I had a moment of hope, albeit one that could be a little hard on the budget. The OLED screens on the iPhone 13 Pro models (starting at $999 for the Pro, $1099 for the Pro Max) sport a refresh rate of 120 Hz, instead of the 60 Hz of other models. The rate of the flicker (the pulse-width modulation, or PWM) is typically four times the refresh rate, and the slower the flicker, the worse the effects on those who are sensitive, so a higher refresh rate could potentially translate to higher-frequency PWM and trigger problems in fewer people. However, these new screens aren't designed to always run at 120 Hz. They will adjust their refresh rate depending on the content, Apple's executives explained, with movies and games running at the highest speed and more static things like photos and email at far slower rates, as low as 10 Hz. (Reducing the refresh rate extends battery life.) So it's hard to say whether this new display is better or worse for the motion-sensitive. It's possible that Apple will offer a user option to lock the refresh rate at 120 Hz in spite of the hit on battery life (no word yet from Apple on that), and I won't really know if that will help unless I try it. Will my motion sensitivity force me to fall further and further behind as Apple's phone technology advances? Apple's September announcements did suggest a possible path. Perhaps my next phone shouldn't be a phone, but rather an iPad Mini. I'd have to back off on a few things I consider essential in a phone—that I could hold it in one hand comfortably and fit it in my back jeans pocket; at 5.3 by 7.69 inches the Mini is a little big for that. But Apple's new mini packs in much of the same technology as its top-of-the-line iPhone 13s—the A15 Bionic chip, Center Stage software to automatically keep the subjects in the screen during video calls, and 5G communications, all behind an LCD, not an OLED, display. And oooh, that wisteria purple!
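    The relationship described above, PWM flicker at roughly four times the panel's refresh rate, is simple enough to tabulate. The sketch below just applies that rule of thumb to the refresh rates mentioned in the article; the 4x factor is the article's own approximation, not an Apple specification.

```python
# Rule of thumb from the article: OLED PWM flicker ~= 4 x refresh rate.
PWM_FACTOR = 4

for refresh_hz in (10, 60, 120):      # rates mentioned for the iPhone displays
    pwm_hz = PWM_FACTOR * refresh_hz
    print(f"{refresh_hz:>3} Hz refresh -> ~{pwm_hz} Hz PWM flicker")
```

    The open question for sensitive users is which end of that adaptive 10-120 Hz range the phone spends most of its time at.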

  • Competing Visions Underpin China’s Quantum Computer Race
    by Craig S. Smith on 15. September 2021. at 14:20

    China and the US are in a race to conquer quantum computing, which promises to unleash the potential of artificial intelligence and give the owner all-seeing, code-breaking powers. But there is a race within China itself among companies trying to dominate the space, led by tech giants Alibaba and Baidu. Like their competitors IBM, Google, Honeywell, and D-Wave, both Chinese companies profess to be developing "full stack" quantum businesses, offering access to quantum computing through the cloud coupled with their own suite of algorithms, software, and consulting services. Alibaba is building solutions for specific kinds of hardware, as IBM, Google, and Honeywell are doing. (IBM's software stack will also support trapped ion hardware, but the company's focus is on supporting its superconducting quantum computers. Honeywell's software partner, Cambridge Quantum, is hardware agnostic, but the two companies' cooperation is focused on Honeywell's trapped ion computer.) Baidu is different in that it is building a hardware-agnostic software stack that can plug into any quantum hardware, whether that hardware uses a superconducting substrate, nuclear magnetic resonance, or ion traps to control its qubits. "Currently we don't do hardware directly, but develop the hardware interface," Runyao Duan, Baidu's head of quantum computing, told the 24th Annual Conference on Quantum Information Processing earlier this year. "This is a very flexible strategy and ensures that we will be open for all hardware providers." Quantum computers calculate using the probability that an array of entangled quantum particles is in a particular state at any point in time. Maintaining and manipulating the fragile particles is itself a difficult problem that has yet to be solved at scale. Quantum computers today consist of fewer than 100 qubits, though hardware leader IBM has a goal of reaching 1,000 qubits by 2023. But an equally thorny problem is how to use those qubits once they exist. "We can build a qubit. We can manipulate a qubit and we can read a qubit," said Mattia Fiorentini, head of machine learning and quantum algorithms at Cambridge Quantum in London. "The question is, how do you build software that can really benefit from all that information processing power?" Scientists around the world are working on ways to program quantum computers that are useful and generalized and that engineers can use pretty much straight out of the box. Of course, real large-scale quantum computing remains a relatively distant dream—currently quantum cloud services are primarily used for simulations of quantum computing using classical computers, although some are using small quantum systems—and so it's too early to say whether Baidu's strategy will pay off. “We can build a qubit. We can read a qubit. But how do you build software that can really benefit from all that information processing power?" In the past, Alibaba worked with the University of Science and Technology of China in Hefei, the capital of central China's Anhui province, which currently has the world's most advanced quantum computer, dubbed the Zuchongzhi 2.1, after China's famous fifth century astronomer who first calculated pi to six decimal places. The company is also building quantum computing hardware of its own. China's most important quantum scientist, Pan Janwei, also worked for Alibaba as scientific advisor. Earlier this year, Pan's team set a new milestone in quantum computation with the 66-qubit Zuchongzhi 2.1. 
Pan and his team ran a calculation on the device in about an hour and a half, which would take the world's fastest supercomputer an estimated eight years to complete. Baidu, meanwhile, has been releasing a series of platforms and tools that it hopes will put it ahead when quantum computers eventually become large enough and stable enough to be practical. Last year, it announced a new cloud-based quantum computing platform called Quantum Leaf, which it bills as the first cloud-native quantum computing platform in China—a bit of semantics apparently intended to put it ahead of Alibaba's cloud division, which began offering a cloud-based quantum platform with the Chinese Academy of Sciences several years ago. Unlike Alibaba's platform, Quantum Leaf's cloud programming environment provides quantum-infrastructure-as-a-service; it offers access to the superconducting quantum processing unit from the Institute of Physics, Chinese Academy of Sciences. Baidu also released Paddle Quantum, a device-independent platform for building and training quantum neural network models for advanced quantum computing applications. It combines AI and quantum computing using the company's deep learning framework called PaddlePaddle—Paddle means PArallel, Distributed, Deep Learning—which has 3.6 million developers and can support hyperscale training models with trillions of parameters. Paddle Quantum, in turn, can be used to develop quantum neural network models for software solutions. Users can then deploy those models on either quantum processing units or simulators through Quantum Leaf. Baidu also offers a "cloud-based quantum pulse computing service" called Quanlse, intended to bridge the gap between hardware and software through sequences of pulses that can control quantum hardware and reduce quantum error, one of the biggest challenges in quantum computing. "We see an increasing number of demands from universities and companies to use our quantum platform and collaborat[e] on quantum solutions, [which] is an essential part of our quantum ecology," a Baidu spokesperson said. Baidu's quantum activities are largely focused on quantum artificial intelligence, an extension of Baidu's current artificial intelligence activities. Quantum computing is expected to accelerate the development of artificial intelligence both by making models faster and by allowing compute-intensive models not currently possible on classical computers. The company established a quantum computing institute in 2018 whose research includes classification of quantum data, which opens the door to quantum machine learning. To classify chemical compounds as toxic or non-toxic, for example, data scientists currently use classical means. But because the underlying data—the molecules and their configurations—is quantum data, it would be faster and more accurate to classify that quantum data directly with a quantum computer. Quantum information is encoded in the probability distribution of qubit states. That probability distribution is reconstructed by collecting samples with classical means, but the number of samples needed grows exponentially as you add qubits. 
"The more you add qubits to your quantum system, the more powerful the system, but the more samples you need to take to extract all useful information," says Cambridge Quantum's Fiorentini. Existing methods for quantum classification are impractical because hardware and infrastructure limitations restrict the complexity of the datasets that can be applied. Baidu researchers' new hybrid quantum-classical framework for supervised quantum learning uses what they call the “shadows" of quantum data as a subroutine to extract significant features—where “shadows" here refers to a method for approximating classical descriptions of a quantum state using relatively few measurements of the state. "If we can get all the key information out of the quantum computer with a very small number of samples without sacrificing information, that's significant," says Fiorentini. Baidu's hybrid quantum-classical framework, meanwhile, sharply reduces the number of parameters, making quantum machine learning models training easier and less compute intensive. In the near term, the company says, Baidu is pursuing more efficient and more powerful classical computing resources that can accelerate its AI applications, from training large-scale models to inferencing on the cloud or edge. In 2018, it developed a cross-architecture AI chip called Kunlun, named or the mountain range on the Tibetan plateau that is the mythological origin of Chinese civilization. Baidu has produced more than 20,000 14-nm Kunlun chips for use in its search engine, Baidu AI cloud and other applications. It recently announced the mass production of Kunlun II, which offers 2-3 times better performance than the previous generation, using the world's leading 7nm process and built on Baidu's own second-generation cross-platform architecture. Kunlun II has a lower peak power consumption while offering significantly better performance in AI training and inferencing. The chip can be applied in multiple scenarios, including in the cloud, on terminal, and at the edge, powering high-performance computer clusters used in biocomputing and autonomous driving.

  • Electric Motor Enables Chain-Free Bike-by-Wire
    by Michael Dumiak on 15. September 2021. at 13:59

    An increasingly common sight in Berlin and other German cities is the oversized electric cargo delivery bike, hissing along (and sometimes in bike lanes) like parcel-laden sailboats on appointed Amazon rounds. German manufacturer Schaeffler sees an opportunity: it is introducing a new generator at the heart of a smart drivetrain concept that some observers are calling bike-by-wire. It's a bike with no chain. Schaeffler's e-motor assembly was among the more out-of-the-ordinary items on display at the recent IAA Mobility show in Munich, which used to be the Frankfurt Motor Show, and more accustomed to roaring supercars and sleek new Benzes (and a thronging public, in pre-Covid times). But in some ways Schaeffler's pedal-cranked generator looked familiar; it's the world around it that's changing. That just might include reimagining the 130-year-old chain-driven bicycle. Schaeffler is working with German electric drive maker and systems integrator Heinzmann to develop a complete bike-by-wire drivetrain. The partners had a prototype on display in Munich (and the previous week at Eurobike) with a robust cargo three-wheel e-bike made by Bayk. Production models could come out as soon as first-quarter 2022, says Marc Hector, an engineer in Schaeffler's actuator systems unit and one of the developers on the pedal generator project. Bike-by-wire physically de-links two kinetic systems: the turning pedals and the powering wheel on a bike. They are instead linked by a controller, an electronic brain, which directs power to either battery or hub motor. It also sends a resistance signal to the pedal, so the rider feels that he or she is pushing against something. Instead of producing motion, pedaling is producing current. Taking the chain out of the mix—if done successfully—would fly open the cramped world of cycle design to new shapes and configurations. Remove the electronic brain, however, and you're left with a stationary exercise bike bolted to a wheeled frame powered by rear electric motors. No wonder industrial designers and engineers have toiled for years on the concept: it's a hard thing to beat pedal-turns-sprocket. But maybe conditions are changing. Schaeffler's pedal-powered generator enables new, chainless e-bike designs. [Image: Schaeffler] Schaeffler is an auto parts and industrial manufacturer that made its name as a ball-bearing and clutch maker. It's developed electro-mobility products for 20 years, but has been on a buying spree: snapping up an engineering specialist firm in e-drives and another in the winding technologies used, among other things, to superefficiently wrap copper wire inside electric motors. It launched a new e-mobility business division that, reports Automotive News Europe, includes 48-volt auto powertrains as well as subsystems for plug-ins and full-electric vehicles. Here it's a different scale of electrics: Schaeffler's pedal generator is a self-contained four-kilo crank-driven e-machine in a cylindrical housing the shape of an oversized soup can, placed in the bottom bracket of a cargo bike. The pedals turn the crank running through a standard brushless DC machine inside: fixed-coil copper windings around an iron core are arranged within the cylinder as the generator stator. Magnets in the turning rotor create the current. Temperature sensors and a controller are housed along with the generator. 
The bike-by-wire controllers direct that current where needed: to the onboard battery for charging, to the interface display, to the rear hub traction motors that propel the bike, and back to the rider's feet in the form of counter-torque, giving the feeling of resistance when pedaling. The trick will be synching it all up via a Controller Area Network (CAN) bus, a 500-kbit/s messaging standard that reduces the amount of cabling needed. It should move the bike on one hand, and independently send the "chain feeling" back to the rider. Move pedal, move bike. "The feeling should be the same as when there is a chain there," says Thorsten Weyer, a Schaeffler system architect. "But there is no chain." Propelling the bike will be the two Heinzmann hub motors, which the controller, set at European Union specs, can get rolling at 125 watts of power each, 250 watts total (500 watts in mountainous Switzerland, 600 in Austria). Each hub can generate 113 newton-meters of torque on the axle, powering it ahead. "With the hub motor you have power where you need it," says Heinzmann electric drives managing director Peter Mérimèche. The controller is programmed with nine gear settings: the countercurrent controlling torque on the axle is reduced or increased automatically based on the grade the bike is traveling on. Designers have dreamed of chainless bikes for more than a century—in analogue form—and for at least 25 years for e-bikes: Andreas Fuchs, a Swiss physicist and engineer, developed his first chainless working models in the mid-90s. Challenges remain. Han Goes, a Dutch consultant and bicycle designer, worked with a Korean auto supplier a decade ago on a personal portable chainless folding bike. Pedaling parameters proved a struggle. "The man and the machine, the cyclist and the generator, the motor: nothing should feel disruptive," he says. If something does, the rider feels out of step. "It is like you are pedaling somewhere into empty space." Goes is still at it, working with design partners on a new chainless cargo bike. Our parcels keep needing delivery, and the city is changing. "Pedal by wire has huge potential. Micromobility is coming," he says. Dutch and Danish and other developers are at it, too. "It offers design and engineering freedom. Simplicity. Less parts and maintenance. Traditional chain drives can never offer that."
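To make the division of labor concrete, here is a rough control-loop sketch of the bike-by-wire idea described above. It is an illustration of the concept, not Schaeffler or Heinzmann firmware, and the power limit, the nine virtual gears, and the torque and efficiency numbers are assumptions chosen only to show how rider power flows to the motors, the battery, and the pedal feel.

```python
# Illustrative bike-by-wire control loop: route the pedal generator's power to
# the hub motors or the battery, and command a counter-torque back to the crank
# so pedaling still feels like a chain drive. The 250-W limit, the nine virtual
# gears, and every constant below are assumptions, not Schaeffler/Heinzmann values.
import math
from dataclasses import dataclass

EU_POWER_LIMIT_W = 250.0                          # combined assist limit (EU spec)
VIRTUAL_GEARS_NM = [8 + 4 * g for g in range(9)]  # pedal counter-torque, gears 1-9

@dataclass
class ControlOutput:
    pedal_counter_torque_nm: float  # resistance sent back to the crank
    hub_motor_power_w: float        # power commanded to the rear hub motors
    battery_charge_power_w: float   # surplus generator power sent to the battery

def control_step(gear: int, cadence_rpm: float, pedal_torque_nm: float,
                 grade_percent: float) -> ControlOutput:
    """One iteration of a hypothetical drivetrain controller."""
    # Rider's mechanical power at the crank: P = torque * angular velocity.
    omega = cadence_rpm * 2 * math.pi / 60.0
    generator_power = max(0.0, pedal_torque_nm * omega) * 0.9  # assume 90% efficiency

    # "Chain feeling": the selected virtual gear's counter-torque, stiffened a
    # little on climbs so the pedals push back harder uphill.
    counter_torque = VIRTUAL_GEARS_NM[gear - 1] * (1.0 + 0.02 * max(0.0, grade_percent))

    # Propulsion demand grows with grade; clamp it to the legal assist limit.
    demand = generator_power * (1.0 + 0.1 * max(0.0, grade_percent))
    hub_power = min(demand, EU_POWER_LIMIT_W)

    # Whatever the motors don't draw goes to charging the battery.
    surplus = max(0.0, generator_power - hub_power)
    return ControlOutput(counter_torque, hub_power, surplus)

# Example: gear 5, 70 rpm, 30 N*m at the pedals, on a 4 percent grade.
print(control_step(gear=5, cadence_rpm=70, pedal_torque_nm=30, grade_percent=4.0))
```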

  • Rule of the Robots: Warning Signs
    by Martin Ford on 14. September 2021. at 18:00

    A few years ago, Martin Ford published a book called Architects of Intelligence, in which he interviewed 23 of the most experienced AI and robotics researchers in the world. Those interviews are just as fascinating to read now as they were in 2018, but Ford's since had some extra time to chew on them, in the context of several years of somewhat disconcertingly rapid AI progress (and hype), coupled with the economic upheaval caused by the pandemic. In his new book, Rule of the Robots: How Artificial Intelligence Will Transform Everything, Ford takes a markedly well-informed but still generally optimistic look at where AI is taking us as a society. It's not all good, and there are still a lot of unknowns, but Ford has a perspective that's both balanced and nuanced, and I can promise you that the book is well worth a read. The following excerpt is a section entitled "Warning Signs," from the chapter "Deep Learning and the Future of Artificial Intelligence." —Evan Ackerman The 2010s were arguably the most exciting and consequential decade in the history of artificial intelligence. Though there have certainly been conceptual improvements in the algorithms used in AI, the primary driver of all this progress has simply been deploying more expansive deep neural networks on ever faster computer hardware where they can hoover up greater and greater quantities of training data. This "scaling" strategy has been explicit since the 2012 ImageNet competition that set off the deep learning revolution. In November of that year, a front-page New York Times article was instrumental in bringing awareness of deep learning technology to the broader public sphere. The article, written by reporter John Markoff, ends with a quote from Geoff Hinton: "The point about this approach is that it scales beautifully. Basically you just need to keep making it bigger and faster, and it will get better. There's no looking back now." There is increasing evidence, however, that this primary engine of progress is beginning to sputter out. According to one analysis by the research organization OpenAI, the computational resources required for cutting-edge AI projects are "increasing exponentially" and doubling about every 3.4 months. In a December 2019 Wired magazine interview, Jerome Pesenti, Facebook's Vice President of AI, suggested that even for a company with pockets as deep as Facebook's, this would be financially unsustainable: When you scale deep learning, it tends to behave better and to be able to solve a broader task in a better way. So, there's an advantage to scaling. But clearly the rate of progress is not sustainable. If you look at top experiments, each year the cost [is] going up 10-fold. Right now, an experiment might be in seven figures, but it's not going to go to nine or ten figures, it's not possible, nobody can afford that. Pesenti goes on to offer a stark warning about the potential for scaling to continue to be the primary driver of progress: "At some point we're going to hit the wall. In many ways we already have." Beyond the financial limits of scaling to ever larger neural networks, there are also important environmental considerations. A 2019 analysis by researchers at the University of Massachusetts, Amherst, found that training a very large deep learning system could potentially emit as much carbon dioxide as five cars over their full operational lifetimes. 
Even if the financial and environmental impact challenges can be overcome—perhaps through the development of vastly more efficient hardware or software—scaling as a strategy simply may not be sufficient to produce sustained progress. Ever-increasing investments in computation have produced systems with extraordinary proficiency in narrow domains, but it is becoming increasingly clear that deep neural networks are subject to reliability limitations that may make the technology unsuitable for many mission critical applications unless important conceptual breakthroughs are made. One of the most notable demonstrations of the technology's weaknesses came when a group of researchers at Vicarious, a small company focused on building dexterous robots, performed an analysis of the neural network used in DeepMind's DQN, the system that had learned to dominate Atari video games. One test was performed on Breakout, a game in which the player has to manipulate a paddle to intercept a fast-moving ball. When the paddle was shifted just a few pixels higher on the screen—a change that might not even be noticed by a human player—the system's previously superhuman performance immediately took a nose dive. DeepMind's software had no ability to adapt to even this small alteration. The only way to get back to top-level performance would have been to start from scratch and completely retrain the system with data based on the new screen configuration. What this tells us is that while DeepMind's powerful neural networks do instantiate a representation of the Breakout screen, this representation remains firmly anchored to raw pixels even at the higher levels of abstraction deep in the network. There is clearly no emergent understanding of the paddle as an actual object that can be moved. In other words, there is nothing close to a human-like comprehension of the material objects that the pixels on the screen represent or the physics that govern their movement. It's just pixels all the way down. While some AI researchers may continue to believe that a more comprehensive understanding might eventually emerge if only there were more layers of artificial neurons, running on faster hardware and consuming still more data, I think this is very unlikely. More fundamental innovations will be required before we begin to see machines with a more human-like conception of the world. This general type of problem, in which an AI system is inflexible and unable to adapt to even small unexpected changes in its input data, is referred to, among researchers, as "brittleness." A brittle AI application may not be a huge problem if it results in a warehouse robot occasionally packing the wrong item into a box. In other applications, however, the same technical shortfall can be catastrophic. This explains, for example, why progress toward fully autonomous self-driving cars has not lived up to some of the more exuberant early predictions. As these limitations came into focus toward the end of the decade, there was a gnawing fear that the field had once again gotten over its skis and that the hype cycle had driven expectations to unrealistic levels. In the tech media and on social media, one of the most terrifying phrases in the field of artificial intelligence—"AI winter"—was making a reappearance. In a January 2020 interview with the BBC, Yoshua Bengio said that "AI's abilities were somewhat overhyped . . . by certain companies with an interest in doing so." 
My own view is that if another AI winter indeed looms, it's likely to be a mild one. Though the concerns about slowing progress are well founded, it remains true that over the past few years AI has been deeply integrated into the infrastructure and business models of the largest technology companies. These companies have seen significant returns on their massive investments in computing resources and AI talent, and they now view artificial intelligence as absolutely critical to their ability to compete in the marketplace. Likewise, nearly every technology startup is now, to some degree, investing in AI, and companies large and small in other industries are beginning to deploy the technology. This successful integration into the commercial sphere is vastly more significant than anything that existed in prior AI winters, and as a result the field benefits from an army of advocates throughout the corporate world and has a general momentum that will act to moderate any downturn. There's also a sense in which the fall of scalability as the primary driver of progress may have a bright side. When there is a widespread belief that simply throwing more computing resources at a problem will produce important advances, there is significantly less incentive to invest in the much more difficult work of true innovation. This was arguably the case, for example, with Moore's Law. When there was near absolute confidence that computer speeds would double roughly every two years, the semiconductor industry tended to focus on cranking out ever faster versions of the same microprocessor designs from companies like Intel and Motorola. In recent years, the acceleration in raw computer speeds has become less reliable, and our traditional definition of Moore's Law is approaching its end game as the dimensions of the circuits imprinted on chips shrink to nearly atomic size. This has forced engineers to engage in more "out of the box" thinking, resulting in innovations such as software designed for massively parallel computing and entirely new chip architectures—many of which are optimized for the complex calculations required by deep neural networks. I think we can expect the same sort of idea explosion to happen in deep learning, and artificial intelligence more broadly, as the crutch of simply scaling to larger neural networks becomes a less viable path to progress. Excerpted from "Rule of the Robots: How Artificial Intelligence will Transform Everything." Copyright 2021 Basic Books. Available from Basic Books, an imprint of Hachette Book Group, Inc.

  • IEEE Power & Energy Society President Dies at 69
    by Joanna Goodrich on 14. September 2021. at 18:00

    Frank Lambert IEEE Power & Energy Society president Life Senior Member, 69; died 27 July Lambert was the IEEE Power & Energy Society's 2020–2021 president. An active member of the society since 1982, he held several positions on its governing board, including region representative and vice president of chapters. He also served on its switchgear committee. He worked at Georgia Power in Atlanta for more than 20 years, and he was a principal research engineer at the National Electric Energy Testing Research and Applications Center at Georgia Tech for more than 25 years, becoming NEETRAC's associate director. Lambert was a longtime supporter of the IEEE PES Scholarship Plus Initiative. He also championed IEEE Smart Village, a program that brings electricity—as well as educational and employment opportunities—to remote communities. He had earned bachelor's and master's degrees in electrical engineering at Georgia Tech. Mason Lamar Williams III Codeveloper of the Williams-Comstock formula Life Fellow, 78; died 28 June Williams joined IBM in San Jose, Calif., in 1970 and spent his entire 32-year career there. He helped develop the Williams-Comstock formula, a critical design tool for magnetic recording systems. When Williams first joined IBM, he worked with Richard "Larry" Comstock, an IBM engineering manager, to characterize and test experimental magnetite film media. Together they developed the formula, which identifies factors that limit hard-disk storage capacity. He also guided the development of thin-film disk drives, according to his biography on the Engineering and Technology History Wiki. He managed several magnetic recording projects during his career. Williams was granted 27 U.S. patents during his time at IBM. He was an active member of the IEEE Magnetics Society and received its 2007 Johnson Storage Device Technology Award. In 2006 he became a distinguished lecturer and spoke about his work in Asia, Europe, and the United States. After retiring from IBM in 2002, Williams volunteered at the Computer History Museum, in Mountain View, Calif. While there, he restored the world's first hard drive—the IBM RAMAC—which is on display at the museum. He earned his bachelor's degree in engineering at Caltech and obtained a Ph.D. in electrical engineering from the University of Southern California, in Los Angeles. Jan Abraham "Braham" Ferreira Past president of the IEEE Power Electronics Society Fellow, 62; died 16 May Ferreira was the 2015–2016 president of the IEEE Power Electronics Society. An expert in power electronic converters, electrical machines, and novel grid components, he spent almost his entire career in academia, conducting research in power electronics. Ferreira's first job, in 1981, was at the Institute of Power Electronics and Electric Drives at Aachen University, in Germany. He worked there for a year before joining ESD Australia, in Cloverdale, as a systems engineer. He left in 1985 to join the Rand Afrikaans University, now part of the University of Johannesburg. In 1998 he immigrated to the Netherlands to serve as chair of the power electronics laboratory at the Delft University of Technology. In 2006 he was promoted to head of the department. Eleven years later, he became director of the Delft-Beijing Institute of Intelligent Science and Technology. In 2019 he joined the University of Twente, in Enschede, Netherlands, as a professor of electrical engineering. He established the Shenzhen-Twente power electronics research program there. 
Its goal is to address key challenges of transitioning from fossil fuels to renewable energy, including battery storage integration, improving power quality, universal energy access, and increasing efficiency and reliability. Ferreira authored or coauthored 130 journal and transactions articles and more than 400 conference papers. He was granted 15 patents. He founded the IEEE Empower a Billion Lives global competition in 2018 to crowdsource ideas that could improve energy access in underserved communities. Ferreira served as 2020 chair of the IEEE PELS International Technology Roadmap on Wide Bandgap Power Semiconductors. He received several recognitions including this year's IEEE PELS Owen Distinguished Service Award, the 2017 IEEE Industry Applications Society's Outstanding Achievement Award, and that society's 2014 Kliman Innovator Award. He earned his bachelor's degree, master's degree, and doctorate in electrical engineering from Rand Afrikaans in 1981, 1983, and 1988. Jack Minker Database and programming pioneer Life Fellow, 94; died 9 April Minker was a pioneer in deductive databases, a data analysis system, and in disjunctive logic programming, a set of logic rules and constraints that can be used when creating a database. He developed the generalized closed-world assumption, a theoretical basis for computer systems and programming languages. After a career in industry working for Auerbach Engineering, Bell Aircraft, and RCA, he joined the University of Maryland in College Park in 1967 as a computer science professor. He became the first chair of the computer science department four years later and was named professor emeritus in 1998. From 1973 until his death, he served as vice chairman of the Committee of Concerned Scientists. From 1980 to 1989, he was vice chairman of the Association for Computing Machinery's Committee on Scientific Freedom and Human Rights. Minker earned a bachelor's degree from Brooklyn College, in New York City, in 1949; a master's degree from the University of Wisconsin-Madison in 1950; and a Ph.D. from the University of Pennsylvania, in Philadelphia, in 1959.

  • Portable Analyzer Brings Blood Testing to Rural Areas
    by Michelle Hampson on 14. September 2021. at 16:37

    Blood tests are vital for detecting and monitoring disease, but they are most often done near more populated areas, where the samples can be analyzed in a laboratory. Seeing the need for a more transportable system that can analyze blood samples in rural and remote areas, two researchers in India have developed a new design that is simple, affordable, and easily deployed anywhere a source of electricity is available. Sangeeta Palekar is a researcher at Shri Ramdeobaba College of Engineering and Management (RCOEM) who helped devise the new design. She and her colleague, Jayu Kalambe, understand how powerful a simple blood test can be. "Routine blood tests can help track and eliminate the threat of many potential diseases," explains Palekar, noting that blood tests make up roughly one-third of all pathology laboratory tests. Many existing devices in the laboratory use light to analyze blood samples. As light passes through a substance, its intensity changes depending on the concentration of the substance it is passing through. In this way, levels of red blood cells or glucose, for example, can be quantified. The new analyzer by Palekar and Kalambe takes a similar approach. It involves an automated fluid dispenser that adds a controlled amount of reagent into the blood sample. Light is then passed through the sample, and a Raspberry Pi computer analyzes the data. The system can be adapted to analyze any biochemical substance in the blood by simply modifying the reagent and spectral wavelength that's used. The researchers began by using a commercially available reagent kit for analyzing glucose levels. They tested this reagent in their new design, and describe the results in a study published August 27 in IEEE Sensors Journal. When comparing the data obtained by their biochemical analyzer to the known results obtained by standard laboratory equipment, they found the data matched almost perfectly. What's more, the device could yield accurate results in just half a minute. This prototype offers a cheap way to analyze blood samples remotely. [Image: Shri Ramdeobaba College of Engineering and Management] Palekar notes there are a lot of perks to this design. "The developed platform offers the advantages of automation, low cost, portability, simple instrumentation, flexibility, and an easily accessible interface," she says. "Overall, the proposed framework is an attractive solution to be incorporated in the low resource area as a universal platform for all biochemistry analysis simply by varying the wavelength of light and reagent." As a next step, the two researchers are interested in expanding upon the different types of blood analyses that can be done, for example to analyze proteins, cholesterol, triglycerides, albumin, and other common substances in the blood that are medically important. Palekar notes that the hardware could be further simplified with the right software solutions. As well, she envisions incorporating an IoT platform into the design, which could be helpful for remote monitoring.
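The optical principle the analyzer relies on can be captured in a few lines. The sketch below is a generic illustration of that principle (the Beer-Lambert relation plus a linear calibration curve), not the RCOEM team's actual software; the calibration standards, intensity counts, and the glucose use case are made-up values for demonstration.

```python
# Illustrative conversion from light-intensity readings to a concentration,
# using the Beer-Lambert relation A = log10(I0 / I) and a linear calibration
# curve. The calibration points and intensity counts are made up; this is not
# the RCOEM team's code.
import numpy as np

def absorbance(incident_intensity: float, transmitted_intensity: float) -> float:
    """Beer-Lambert absorbance A = log10(I0 / I)."""
    return float(np.log10(incident_intensity / transmitted_intensity))

# Hypothetical calibration standards: glucose concentration (mg/dL) versus the
# absorbance measured for reagent-treated standards at the working wavelength.
standard_conc = np.array([0.0, 50.0, 100.0, 200.0, 300.0])
standard_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.61])

# Fit A = m*c + b to the standards, then invert it for unknown samples.
slope, intercept = np.polyfit(standard_conc, standard_abs, deg=1)

def concentration_from_reading(i0: float, i: float) -> float:
    """Estimate analyte concentration (mg/dL) from raw intensity readings."""
    return (absorbance(i0, i) - intercept) / slope

# Example: the reference beam reads 1000 counts, the sample beam reads 620.
print(f"Estimated glucose: {concentration_from_reading(1000.0, 620.0):.1f} mg/dL")
```

Swapping the reagent and the working wavelength, as the researchers describe, would only change the calibration data, not the structure of the calculation.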

  • Faster Microfiber Actuators Mimic Human Muscle
    by Payal Dhar on 14. September 2021. at 15:22

    Robotics, prosthetics, and other engineering applications routinely use actuators that imitate the contraction of animal muscles. However, the speed and efficiency of natural muscle fibers is a demanding benchmark. Despite new developments in actuation technologies, for the most part artificial muscles are either too large, too slow, or too weak. Recently, a team of engineers from the University of California San Diego (UCSD) has described a new artificial microfiber made from liquid crystal elastomer (LCE) that replicates the tensile strength, quick responsiveness, and high power density of human muscles. "[The LCE] polymer is a soft material and very stretchable," says Qiguang He, the first author of their research paper. "If we apply external stimuli such as light or heat, this material will contract along one direction." Though LCE-based soft actuators are common and can generate excellent actuation strain—between 50 and 80 percent—their response time, says He, is typically "very, very slow." The simplest way to make the fibers both responsive and fast was to reduce their diameter. To do so, the UCSD researchers used a technique called electrospinning, which involves the ejection of a polymer solution through a syringe or spinneret under high voltage to produce ultra-fine fibers. Electrospinning is used for the fabrication of small-scale materials, to produce microfibers with diameters between 10 and 100 micrometers. It is favored for its ability to create fibers with different morphological structures, and is routinely used in various research and commercial contexts. The microfibers fabricated by the UCSD researchers were between 40 and 50 micrometers, about the width of a human hair, and much smaller than existing LCE fibers, some of which can be more than 0.3 millimeters thick. "We are not the first to use this technique to fabricate LCE fibers, but we are the first…to push this fiber further," He says. "We demonstrate how to control the actuation of the [fibers and measure their] actuation performance." [Image: University of California, San Diego/Science Robotics] As proof of concept, the researchers constructed three different microrobotic devices using their electrospun LCE fibers. Their LCE actuators can be controlled thermo-electrically or using a near-infrared laser. When the LCE material is at room temperature, it is in a nematic phase: He explains that in this state, "the liquid crystals are randomly [located] with all their long axes pointing in essentially the same direction." When the temperature is increased, the material transitions into what is called an isotropic phase, in which its properties are uniform in all directions, resulting in a contraction of the fiber. The results showed an actuation strain of up to 60 percent—which means a 10-centimeter-long fiber will contract to 4 centimeters—with a response speed of less than 0.2 seconds, and a power density of 400 watts per kilogram. This is comparable to human muscle fibers. An electrically controlled soft actuator, the researchers note, allows easy integration with low-cost electronic devices, which is a plus for microrobotic systems and devices. Electrospinning is a very efficient fabrication technique as well: "You can get 10,000 fibers in 15 minutes," He says. That said, there are a number of challenges that still need to be addressed. "The one limitation of this work is…[when we] apply heat or light to the LCE microfiber, the energy efficiency is very small—it's less than 1 percent," says He. 
"So, in future work, we may think about how to trigger the actuation in a more energy-efficient way." Another constraint is that the nematic–isotropic phase transition in the electrospun LCE material takes place at a very high temperature, over 90 C. "So, we cannot directly put the fiber into the human body [which] is at 35 degrees." One way to address this issue might be to use a different kind of liquid crystal: "Right now we use RM 257 as a liquid crystal [but] we can change [it] to another type [to reduce] the phase transition temperature." He, though, is optimistic about the possibilities to expand this research in electrospun LCE microfiber actuators. "We have also demonstrated [that] we can arrange multiple LCE fibers in parallel…and trigger them simultaneously [to increase force output]… This is a future work [in which] we will try to see if it's possible for us to integrate these muscle fiber bundles into biomedical tissue."

  • Could Sucking Up the Seafloor Solve Battery Shortage?
    by Prachi Patel on 13. September 2021. at 16:00

    Reeling from a crushing shortage of semiconductor chips for vehicles, carmakers also face another looming crisis: producing enough batteries to drive the global pivot towards electric vehicles. The supply of metals like cobalt, copper, lithium, and nickel needed for batteries is already shaky, and soaring demand for the hundreds of millions of batteries needed in the coming decades is likely to trigger shortages and high prices. Some companies want to harvest metallic treasures from the sea. Strewn across large swaths of ocean plains some 5,000 meters deep are potato-like lumps called polymetallic nodules rich in metals and rare-earth elements critical for batteries and electronics. Nodules in the Clarion-Clipperton Zone (CCZ), which stretches between Mexico and Hawaii, are estimated to contain more cobalt and nickel than all known deposits on land. The Metals Company (previously DeepGreen Metals) in Vancouver expects to be the first to commercially produce metals from these nodules by 2024. And CEO Gerard Barron is confident they can do this without harming critical subsea ecosystems. The nodules sit on top of the seafloor, so there is no drilling or digging needed. The company's robotic collector will inch along the seafloor, shooting out jets of seawater at the nodules, gently dislodging and suctioning them up. "It's like picking up golf balls on a driving range," says CFO Craig Shesky. A ship will take the nodules to an onshore processing plant, where they will be smelted to get nickel sulfate, cobalt sulfate, copper and manganese. Texas is top of The Metals Company's list for the processing plant given the state's ports and access to cheap renewables. "We are committed to turning those rocks into metal using renewable power and with zero solid waste," Shesky says. A raw polymetallic nodule. [Image: The Metals Company] Agencies from seventeen nations have exploration contracts in the CCZ from the International Seabed Authority. The Metals Company has teamed up with three of those, from the tiny Pacific island nations of Kiribati, Nauru and Tonga, to access 150,000 square kilometers that, Shesky says, "have sufficient copper, nickel and cobalt to electrify the world's vehicle fleet several times over." Land-based mining is already fraught with environmental destruction, emissions, human rights abuses, and mountains of waste, as well as precarious global supply chains. The Democratic Republic of Congo produces 70 percent of the world's cobalt, and most of the world's nickel sits under Indonesian rainforests. China processes about 80 percent of battery raw materials, creating a chokehold on global supplies. And with much of the world's high-grade resources already spent, companies have turned to low-grade mining resources that produce more waste and emissions. "There will be a nickel deficit of 40 percent by the end of decade, even higher than copper," Shesky says. "We don't want to have happen with EVs what happened with the semiconductor shortage this year. The question is where should you go to get that metal? Let's go to the desert of the sea, the deep-sea abyssal plains, the parts of the world with least life as opposed to most life like the rainforest. There is 1500 times less life per square meter in these areas than in rainforests." But while they might have low biomass, they also have astounding biodiversity, says Craig Smith, an oceanography professor at the University of Hawaii at Manoa who has led seven research expeditions to the CCZ. 
Deep-sea plains are sensitive, pristine ecosystems untouched by humans, and their value is hard to assess. "Most of the species we bring up during these studies are new to science. We actually think it's a biodiversity hotspot." So ocean mining could hurt, maybe annihilate, species we don't even know about yet, Smith says. Sediment plumes that the mining zones create could affect creatures living hundreds of kilometers away. And the nodules themselves are habitat to thousands of microorganisms. "It's not possible to mine polymetallic nodules from the seafloor on a commercial scale without causing substantial ecological damage over tens of thousands of kilometers," he says. Shesky points out, though, that 70 percent of the life in these regions is bacteria, as opposed to the diversity found in the rainforest. A recent study by mechanical engineers at MIT has shown that the detrimental impacts of sediment plumes generated by collector vehicles and by the water-sediment mixture returned into the sea from ships after separating the nodules might be exaggerated. The sediments settle down or dilute back to background levels quickly. Another study has shown that producing metals from nodules would create a tenth of the carbon dioxide emissions of producing them from land ores. Even so, there's a lot of opposition to mining the deep-sea floor for resources. BMW, Google, Samsung, and Volvo have all said they will not buy metals mined from such sources until the environmental impacts are better understood. The companies have all signed a World Wildlife Fund moratorium to that effect. As an extra precaution to ensure oversight and minimal disruption to these deep-ocean residents, The Metals Company will use drones and subsea sensors to monitor nodule collection in real time and beam the data to stakeholders and regulators. "If there is impact to creature that we didn't anticipate, we can change our plan," he says. The company last September awarded University of Hawaii at Manoa marine biologist Jeff Drazen US $2.9 million to assess the impacts of deep-sea mining in the CCZ.

  • Cheap Sensors for Smarter Farmers
    by Karen Kwon on 12. September 2021. at 15:14

    Demonstrating that we are truly living in an era of "smart agriculture," many of the technologies showcased in this year's ARPA-E Summit were in the farming sector—most notably, sensors for crops and farmlands. Just like the smart devices that enable us to monitor our health every minute of the day, these agricultural sensors allow farmers to monitor plant and soil conditions in close to real time. Here are two that caught this writer's eye. First up is a 3D-printed, biodegradable soil sensor that checks moisture and nitrogen levels. One of the benefits of using printed electronics is being able to mass-produce at a low cost, says Gregory Whiting at the University of Colorado, Boulder, one of the principal investigators of the team working on the sensors. "Agriculture is a pretty cost constrained industry," Whiting says, and 3D-printed sensors allow farmers to place many sensors throughout their large farmlands—often hundreds of acres—without spending a ton of money. And this enables the farmers to monitor soil conditions in greater detail, Whiting says. Depending on factors such as how the sun hits the ground, the amount of water or fertilizer needed could vary patch by patch. Traditional sensors were too expensive for farmers to buy in large quantities, and, as a result, the spatial resolution wasn't high enough to reflect this variability. With the new, cheap sensors, farmers will be able to collect data on their farms at a resolution fine enough to capture that variability. One problem with mass-producing sensors, however, is that it creates a lot of waste. That's why Whiting and his colleagues decided on using biodegradable materials, such as zinc and wood. But this solution also posed a challenge: What if the sensor degrades before the job is done? Whiting and his team solved this issue by encapsulating the sensor parts using beeswax or soy wax. The protective wax casing ensures that the moisture- and nitrogen-sensing parts, made from zinc, operate properly for the desired amount of time, typically a few months until the crops fully grow. And by the end of that period, the casing would start to break down, and the sensor would degrade. Whiting says that the sensor signal will be transmitted via long-range RFID—like the tags used on toll roads—and collected using a drone or farm equipment with a reader attached. The team is currently testing the sensors in a greenhouse. By 2022, they plan to move out to a field. "Building electronics made of zinc and wood and waxes—it's just very weird and cool," Whiting says. The other notable sensor presented at this year's ARPA-E Summit was a zero-power infrared sensor that detects a plant's thirst. This sensor, developed by Matteo Rinaldi and his team at Northeastern University, shines infrared light on the leaves of a plant. By reading the reflected light, it can tell if the plant is dehydrated or not. [Image: Conceptual vision of a smart farm that could employ the Northeastern University zero-power, low-cost sensor nodes in a crop field; each wireless sensor node is used for non-contact water-stress detection in plants. Vaggeeswar Rajaram/Northeastern University] The idea to use infrared light stemmed from Rinaldi's previous project: detecting exhaust fumes from cars. By shifting the detection range to match the reflection signal from the moisture of plant leaves instead of the exhaust fumes, the team was able to reinvent their technology for agricultural use. 
With this shift, "whatever changes in reflectance [are] depending only on water stress of the plant and nothing else," says Antea Risso, a graduate student working on this project. "So, it's quite reliable." One of the biggest drawbacks of the soil sensors is the need for calibration, Risso says. Even within a single farm, there could be many different types of soil present, and calibrating the sensors accordingly could take up a lot of time. The plant sensor would only have to be calibrated by the plant type, which is minimal compared to the soil sensor, Risso says. In addition, reading the moisture level directly from the plant assesses its health more accurately, which is ultimately what the farmers care about. [Image: The silicon chip hosts hundreds of plasmonically enhanced micromechanical photoswitches. Ruby Wallau/Northeastern University] The sensor uses energy only when the infrared signal indicates that the plant is dehydrated. When the signal with the right wavelength is absorbed by the nanoplasmonic absorbers designed and engineered by Rinaldi's team, the temperature of the device increases, and that causes the device to bend, turning on the power switch. Because the switch is expected to be on only rarely, Rinaldi says their sensors will last about 10 years without the need to change batteries. So far, the team has tested their prototypes in the lab environment. By the end of this year, Rinaldi says, the team will test a portable prototype in an actual field. At the same time, Rinaldi is already taking steps to commercialize the technology. He and Zhenyun Qian, a research assistant professor at Northeastern, co-founded Zepsor Technologies, aiming to bring the technology to market. "There are a lot of precision-oriented agriculturists [who] are very, very interested in this," Risso says. "So, we are hoping to test it sooner with their collaboration."
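In software terms, the decision the Northeastern device makes mechanically, staying dormant until the reflected infrared signature says the plant is thirsty, looks roughly like the sketch below. It is a conceptual illustration only: the real sensor does this with plasmonic absorbers and a micromechanical switch rather than code, and the band ratio, index, and threshold here are assumptions.

```python
# Conceptual version, in software, of the wake-only-when-stressed logic. The
# real device does this mechanically with plasmonic absorbers and a
# micromechanical switch; the band ratio, index, and threshold here are
# illustrative assumptions.
def water_stress_index(reflectance_reference: float,
                       reflectance_water_band: float) -> float:
    """Normalized-difference index that rises as the water-sensitive infrared
    band reflects more, i.e. as the leaf dries out."""
    return (reflectance_water_band - reflectance_reference) / (
        reflectance_water_band + reflectance_reference)

def should_wake_and_transmit(reflectance_reference: float,
                             reflectance_water_band: float,
                             threshold: float = 0.15) -> bool:
    """Stay dormant unless the reflected signature indicates water stress."""
    return water_stress_index(reflectance_reference, reflectance_water_band) > threshold

# Example readings (made-up values): a well-watered leaf, then a dehydrated one.
print(should_wake_and_transmit(0.42, 0.45))  # False -> stay asleep
print(should_wake_and_transmit(0.30, 0.48))  # True  -> wake up and report
```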

  • Graphene Jolts Sodium-Ion Battery Capacity
    by Prachi Patel on 11. September 2021. at 14:00

    After years of anticipation, sodium-ion batteries are starting to deliver on their promise for energy storage. But so far, their commercialization is limited to large-scale uses such as storing energy on the grid. Sodium-ion batteries just don't have the oomph needed for EVs and laptops. At about 285 Wh/kg, lithium-ion batteries have twice the energy density of their sodium counterparts, making them more suitable for those portable applications. Researchers now report a new type of graphene electrode that could boost the storage capacity of sodium batteries to rival lithium's. The material can pack nearly as many sodium ions by volume as a conventional graphite electrode does lithium. It opens up a path to making low-cost, compact sodium batteries practical. Abundant and cheap, and with chemical properties similar to lithium's, sodium is a promising replacement for lithium in next-generation batteries. The stability and safety of sodium batteries make them especially promising for electronics and cars, where overheated lithium-ion batteries have sometimes proven hazardous. "But currently the major problem with sodium-ion batteries is that we don't have a suitable anode material," says Jinhua Sun, a researcher in the department of industrial and materials science at Chalmers University of Technology. For the battery to charge quickly and store a lot of energy, ions need to easily slip in and out of the anode material. Sodium-ion batteries use cathodes made of sodium metal oxides, while their anodes are typically carbon-based, just like those of their lithium cousins; although Santa Clara, California-based Natron Energy is making both its anodes and cathodes out of Prussian blue, a pigment used in dyes and paints. Some sodium battery developers are using activated carbon for the anode, which holds sodium ions in its pores. "But you need to use high-grade activated carbon, which is very expensive and not easy to produce," Sun says. Graphite, which is the anode material in lithium-ion batteries, is a lower cost option. However, sodium ions do not move efficiently between the stack of graphene sheets that make up graphite. Researchers used to think this was because sodium ions are bigger than lithium ions, but it turns out that even bigger potassium ions can move in and out of graphite easily, Sun says. "Now we think it's the surface chemistry of graphene layers and the electronic structure that cannot accommodate sodium ions." He and his colleagues have come up with a new graphite-like material that overcomes these issues. To make it, they grow a single sheet of graphene on copper foil and attach a single layer of benzene molecules to its top surface. They grow many such graphene sheets and stack them to make a layer cake of graphene held apart by benzene molecules. The benzene layer increases the spacing between the layers to allow sodium ions to enter and exit easily. They also create defects on the graphene surface that act as active reaction sites to adsorb the ions. Plus, benzene has chemical groups that bind strongly with sodium ions. This seemingly simple strategy boosts the material's sodium ion-storing capacity drastically. The researchers' calculations show that its capacity matches graphite's capacity for lithium. Graphite's capacity for sodium ions is typically about 35 milliampere-hours per gram, but the new material can hold over 330 mAh/g, about the same as graphite's lithium-storing capacity.
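Those capacity figures are easy to sanity-check with the standard theoretical-capacity formula Q = nF/(3.6M), where n is the number of electrons transferred per formula unit, F is Faraday's constant, and M is the molar mass of the host material. The short calculation below is an illustration using that textbook formula, not numbers from the Chalmers paper beyond the 35 and 330 mAh/g quoted above.

```python
# Worked check of the capacity figures, using the textbook formula
# Q [mAh/g] = n * F / (3.6 * M). The LiC6 stoichiometry is standard; the 35 and
# 330 mAh/g sodium figures are the ones quoted in the article.
F = 96485.0    # Faraday constant, C/mol
M_C = 12.011   # molar mass of carbon, g/mol

# Graphite storing lithium as LiC6: six carbon atoms host one ion/electron pair.
q_lic6 = F / (3.6 * 6 * M_C)   # ~372 mAh per gram of carbon
print(f"Theoretical LiC6 capacity: {q_lic6:.0f} mAh/g")

# Plain graphite for sodium (~35 mAh/g) versus the benzene-spaced stack (>330).
print(f"Improvement factor: {330 / 35:.1f}x")
print(f"Fraction of the LiC6 figure: {330 / q_lic6:.0%}")
```

The roughly tenfold jump, landing within about 10 percent of graphite's lithium figure, is what makes the "rival lithium's" claim plausible at the electrode level.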

  • From Engineering Intern to Chairman of Tata
    by Kathy Pretz on 10. September 2021. at 18:00

    [Image: Tata Sons] There was a time when managing the family farm in India would have been Natarajan "Chandra" Chandrasekaran's path, but his love of computer programming derailed that plan. After returning home from the Coimbatore Institute of Technology with a bachelor's degree in applied sciences, Chandra (as he likes to be called) tried his hand at farming but quickly realized it was not for him. His father—who had given up his own career as a lawyer to run the farm after his father died—encouraged Chandra to continue to pursue his passion for computers. Today the IEEE senior member is chairman of Tata Sons, in Mumbai, India, the holding company for the Tata Group, which encompasses more than 30 businesses. They include chemical plants and consultancy services as well as hotels and steel mills. Chandra chairs the boards of several of the companies including Tata Motors, Tata Power, Tata Consultancy Services (TCS), and Tata Steel. The group employs more than 750,000 people around the world. The Tata Group trading company was launched in 1868 by Jamsetji Tata. Regarded as the "father of Indian industry," Tata had a vision: to create a responsible company that serves the community. Chandra continues to support that mission by helping to fight the COVID-19 pandemic in India and finding ways to use technology to solve societal problems such as access to health care and education. Chandra says his company's ability to make a difference is the single most important thing to him. "We make an impact on our employees, society, businesses, and—with our huge ecosystem—on the markets in which we operate," he says. He adds that he enjoys working with smart people and "thinking about the future, whether it is about creating our businesses or making contributions to a sustainable world." FROM ENGINEERING INTERN TO MANAGER After graduating in 1986 from Coimbatore, in the state of Tamil Nadu, Chandra returned to run his family's farm in Mohanur, located in the state's Namakkal District. After breaking the news to his father that he would rather be a computer programmer than a farmer, Chandra entered a three-year postgraduate degree program to study computer science and its applications at the state's Regional Engineering College in Tiruchirappalli (now the National Institute of Technology). An internship was required during the last semester. Chandra applied for an opening at TCS, an IT services company, which in 1986 was an up-and-coming firm with about 500 employees. Two months into the internship, the company offered him a job as an engineer after he graduated. He started working for TCS in 1987 and has never left the Tata Group. During his nearly 35 years there, he rose through the ranks, switching from engineering to management in the 1990s. Since 1997 he has held senior-level positions in marketing and sales. From 1998 to 2007 he helped TCS grow its business around the world, including in China, Eastern Europe, and Latin America. In 2009 he was promoted to chief executive. He held that position until 2017, when he was appointed chairman of Tata Sons. "The company gave me a lot of different roles, and as you do better then you get lucky," he says, laughing. "Most of the knowledge I picked up was on the job and by taking on different projects." He learned management skills from coworkers as well as clients, he says. 
"TCS not only has the smartest people working for it, but we also work with some of the best companies as clients," he says. "When you work with smart people, you learn. And when you work with demanding clients, you learn. Things rub off on you. My passion has always been to understand deeply what makes a difference to a customer." He says he has always been willing to take on new duties but also never hesitated to ask for help. "TCS has a very supportive culture," he says, "so whenever you have major issues with clients or businesses, you derive support." GIVING BACK With India's under-resourced health care system, Chandra says, he knew 2019's novel coronavirus could have a devastating effect on the country. Since April 2020 the Tata Group, including its philanthropic trusts, has committed more than US $200 million for COVID-related activities. That money has been used in a variety of ways, including building hospitals and increasing the capacity of existing ones by setting up COVID-19 wards and intensive-care units. The oxygen that Tata Steel's mills use to convert iron and scrap metal into steel was diverted for medical use. At one point during the pandemic, Chandra says, the Tata Group provided 10 percent of the medical oxygen required in the country. Once COVID-19 vaccines became available, the group started a massive campaign to inoculate its employees and their families. "Helping is in our DNA," Chandra says of the affiliate companies in the group. "All of our CEOs have a culture of doing good for society." Chandra says he often is asked when business will return to normal after the pandemic. He says it won't. "We are not going back; we are going forward," he says. "While many things about COVID have been negative, there are many positives. COVID has moved the world forward in multiple dimensions. Number one is digital adoption. Number two: Everyone now recognizes the importance of sustainability, because we experienced how much we can dramatically change things, like air quality, in a relatively short period of time—especially in India. "The pandemic has brought to the fore the importance of addressing key global existential risks that we may have treated more theoretically in the past. "Also, the global supply chain cannot be concentrated in any one country. It must be designed for resilience." TECH SOLUTIONS Chandra says artificial intelligence and related technologies can help mankind tackle societal issues such as universal access to health care and a quality education. He outlined his ideas in Bridgital Nation: Solving Technology's People Problem, a 2019 book he coauthored with Roopa Purushothaman. "I believe very strongly that digital-physical integration is the way to solve problems," he says. "Take a country like India—we have a shortage of everything. We have a shortage of doctors, schools, hospitals, and infrastructure. We neither have the time nor the money to be able to build all the capacity we need." For example, about two-thirds of India's citizens live in rural areas, he notes, but most of the doctors are in cities. He says the solution is to use AI, machine learning, the Internet of Things, and cloud computing to create a network of services that can be delivered where they are needed most. That would include telehealth and remote learning for people in rural areas. Poverty could be reduced dramatically, he says, by using AI to increase the capabilities of low-skilled workers so they could perform higher-level jobs. 
He estimates more than 30 million jobs could be created by 2025. To help make that possible, in 2019 the Tata Group unveiled the Indian Institute of Skills, a joint initiative with the Ministry of Skills Development and the Indian government that provides vocational training. The Tata Group also offers programs that encourage students to pursue STEM careers around the world, and it has launched worldwide adult literacy programs. There are also programs focused on encouraging more women to become entrepreneurs and enter the tech field. Chandra says he is concerned about his employees' well-being. An avid runner, he was the inspiration behind the company's Fit4Life program. It encourages employees to be physically active and give back to their community. "One is for the body, the other one is for the soul," he says. CAREER ADVICE Now is the most exciting time to be an engineer, he says. "There are so many opportunities," he says, "because the pace of change is huge and technology development is huge." He encourages those starting out to "go after what you're passionate about and what excites you. People will live longer, so careers are not going to be over at the age of 60." What's more, he says, "people will probably have two, three, or four careers in their lifetime, so it's a long game. If you're going to work 30, 40, 50 years or even longer, you should enjoy the process." The top skill he says everyone should have is the ability to continue to learn. That's why he renews his IEEE membership, he says. Chandra became a member in 1987 because TCS required its professional employees to join a society. His colleagues recommended IEEE because, they said, he would become more knowledgeable about engineering and cutting-edge technology by reading its publications. "Even reading just one article could go a long way," they told him. He remains a member, he says with a laugh, "because I still have to learn." "It's not about just learning what skills I need," he says. "It is about opening up my mind."

  • Video Friday: Robotic Gaze
    by Evan Ackerman on 10. September 2021. at 17:16

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!): DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA WeRobot 2021 – September 23-25, 2021 – [Online Event] IROS 2021 – September 27-October 1, 2021 – [Online Event] Robo Boston – October 1-2, 2021 – Boston, MA, USA ROSCon 2021 – October 20-21, 2021 – [Online Event] Let us know if you have suggestions for next week, and enjoy today's videos. Gaze is an extremely powerful and important signal during human-human communication and interaction, conveying intentions and informing about others' decisions. What happens when a robot and a human interact looking at each other? Researchers at the Italian Institute of Technology (IIT) investigated whether a humanoid robot's gaze influences the way people reason in a social decision-making context. [ Science Robotics ] Reachy is here to help you make pancakes, for some value of "help." Mmm, extra crunchy! [ Pollen Robotics ] It's surprising that a physical prototype of this unicorn (?) robot for kids even exists, but there's no way they're going to get it to run. And it's supposed to be rideable, which seems like a fun, terrible idea. [ Xpeng ] via [ Engadget ] Segway's got a new robot mower now, which appears to use GPS (maybe enhanced with a stationary beacon?) to accurately navigate your lawn. [ Segway ] AVITA is a new robotic avatar company founded by Hiroshi Ishiguro. They've raised about $5 million USD in funding to start making Ishiguro's dreams come true, which is money well spent, I'd say. [ Impress ] It's interesting how sophisticated legged robots from Japan often start out with a very obvious "we're only working on the legs" design, where the non-legged part of the robot is an unapologetic box. Asimo and Schaft both had robots like this, and here's another one, a single-leg hopping robot from Toyota Technological Institute. [ TTI ] via [ New Scientist ] Thanks, Fan! How to make a robot walking over an obstacle course more fun: costumes and sound effects! These same researchers have an IROS paper with an untethered version of their robot; you can see it walking at about 10:30 in this presentation video. [ Tsinghua ] Thanks, Fan! Bilateral teleoperation provides humanoid robots with human planning intelligence while enabling the human to feel what the robot feels. It has the potential to transform physically capable humanoid robots into dynamically intelligent ones. However, dynamic bilateral locomotion teleoperation remains a challenge due to the complex dynamics it involves. This work presents our initial step to tackle this challenge via the concept of wheeled humanoid robot locomotion teleoperation by body tilt. [ RoboDesign Lab ] This is an innovative design for a powered exoskeleton of sorts that can move on wheels but transform into legged mode to be able to climb stairs. [ Atoun ] Thanks, Fan! I still have no idea why the Telexistence robot looks the way it does, but I love it. [ Telexistence ] In this video, we go over how SLAMcore's standard SDK can be integrated with the ROS1 Navigation Stack, enabling autonomous navigation of a Kobuki robot with an Intel RealSense D435i depth camera. [ SLAMcore ] Thanks, Fan! Normally, I wouldn't recommend a two-hour-long video with just talking heads. 
But when one of those talking heads is Rod Brooks, you know that the entire two hours will be worth it. [ Lex Fridman ]

  • Exosuit That Helps With the Heavy Lifting
    by Payal Dhar on 10. September 2021. at 14:44

    New advances in robotics can help push the limits of the human body to make us faster or stronger. But now researchers from the Biorobotics Laboratory at Seoul National University (SNU) have designed an exosuit that corrects body posture. Their recent paper describes the Movement Reshaping (MR) Exosuit, which, rather than augmenting any part of the human body, couples the motion of one joint to lock or unlock the motion of another joint. It works passively, without any motors or batteries. For instance, when attempting to lift a heavy object off the floor, most of us stoop from the waist, which is an injury-inviting posture. The SNU device hinders the stooping posture and helps correct it to a (safer) squatting one. "We call our methodology 'body-powered variable impedance'," says Kyu-Jin Cho, a biorobotics engineer and one of the authors, "[as] we can change the impedance of a joint by moving another." Most lift-assist devices—such as Karl Zelik's HeroWear—are designed to reduce the wearer's fatigue by providing extra power and minimizing interference in their volitional movements, says co-author Jooeun Ahn. "On the other hand, our MR Exosuit is focusing on reshaping the wearer's lifting motion into a safe squatting form, as well as providing extra assistive force." [Movement reshaping exo-suit for safe lifting] The MR suit has been designed to mitigate injuries for workers in factories and warehouses who undertake repetitive lifting work. "Many lift-related injuries are caused not only by muscle fatigue but also by improper lifting posture," adds Keewon Kim, a rehabilitation medicine specialist at SNU College of Medicine, who also contributed to the study. Stooping is easier than squatting, and humans tend to choose the more comfortable strategy. "Because the deleterious effects of such comfortable but unsafe motion develop slowly, people do not perceive the risk in time, as in the case of disk degeneration." The researchers designed a mechanism to lock the hip flexion when a person tries to stoop and unlock it when they try to squat. "We connected the top of the back to the foot with a unique tendon structure consisting of vertical brake cables and a horizontal rubber band," explains Sung-Sik Yoon, a graduate researcher and the study's first author. "When the hip is flexed while the knee is not flexed, the hip flexion torque is delivered to the foot through the brake cable, causing strong resistance to the movement. However, if the knees are widened laterally for squatting, the angle of the tendons changes, and the hip flexion torque is switched to be supported by the rubber band." The device was tested on ten human participants, who were first-time users of the suit. Nine out of ten participants changed their motion pattern closer to the squatting form while wearing the exosuit. This, says Ahn, is a 35% improvement in the average postural index of the 10 participants. They also noticed a 5.3% reduction in the average metabolic energy consumption of the participants. "We are now working on improving the MR Exosuit in order to test it in a real manual working place," Ahn adds. "We are going to start a field test soon." The researchers plan to commercialize the device next year, but there are still some kinks to work out. While the effectiveness of the suit has been verified in their paper, the long-term effects of wearing it have not. 
"In the future, we plan to conduct a longitudinal experiment in various fields that require lift posture training such as industrial settings, gyms, and rehabilitation centers," says Cho. They are also planning a follow-up study to expand the principle of body-powered variable impedance to sports applications. "Many sports that utilize the whole body, such as golf, swimming, and running, require proper movement training to improve safety and performance," Cho continues. "As in this study, we will develop sportswear for motion training suitable for various sports activities using soft materials such as cables and rubber bands." This study shows that artificial tendons whose structure is different from that of humans can effectively assist humans by reshaping the motor pattern, says Ahn. The current version of the exosuit can also be used to prevent inappropriate lifting motions of patients with poor spinal conditions. He and his colleagues expect that their design will lead to changes in future research on wearable robotics: "We demonstrated that wearable devices do not have to mimic the original anatomical structure of humans."

  • Brain-Inspired AI Will Enable Future Medical Implants
    by Rebecca Sohn on 10. September 2021. at 13:00

    Artificial intelligence can identify subtle patterns in data, which is particularly useful in medicine. So far, these have been offline processes—doctors perform a medical test, and data from the test is run through a software program afterward. A real-time process could allow doctors to identify and treat a medical problem much more quickly. One way to detect these patterns in real time would be with an AI system implanted in the body. In a new study, researchers from TU Dresden created a system made from networks of tiny polymer fibers that, when submerged in a solution meant to replicate the inside of the human body, function as organic transistors. These networks can detect and classify abnormal electrical signals in the body. To test their system, the researchers used it to identify patterns in types of irregular heartbeats. Technology like this could be used to detect medical concerns such as irregular heartbeats or high blood sugar. "What we have demonstrated is a general concept," said Matteo Cucchi, a PhD student at TU Dresden and the study's lead author. "It's a general approach that then can be specialized for one particular application." To create biocompatible hardware, Cucchi and his colleagues used networks of polymer fibers made out of a carbon-based material called PEDOT. The tiny networks of branching fibers are visible with a microscope. Cucchi and fellow researchers led by Karl Leo, senior author of the study and director of the Dresden Integrated Center for Applied Physics and Photonic Materials, where this research took place, were struck by how similar they looked to neurons. When immersed in an electrolyte (a salt solution) that mimics conditions inside the human body, the networks of fibers become organic electrochemical transistors (OECTs), which, like silicon-based transistors in traditional computers, act as switches for electrical current, though using a different mechanism. In a traditional silicon transistor, a metal contact controls whether the transistor is on or off. An OECT "works very differently because you contact the channel with the electrolyte, and you change the potential of the electrolyte," said Leo. "In this way, you can control the number of ions which are in the polymer [fibers] or the electrolyte. And that is changing the conductivity." These organic transistors transform electrical inputs into nonlinear signals, like the binary code that computers use, making them usable for computation. The researchers used an approach to machine learning called reservoir computing for their system. Unlike the highly structured organization of other machine-learning systems, the components of a reservoir computer are configured randomly to form a reservoir. In the study, the OECTs were random because of the way they were made. The researchers used a method called AC electropolymerization, which involves running alternating current between electrodes across a liquid precursor to PEDOT. Material starts to condense on one electrode, and a fiber eventually grows to the other. The process produces fibers with varying resistances and response times, which help transform the electrical inputs into nonlinear outputs. The researchers input data in the form of electrical signals, replicating the type of ionic information that the system would receive if it were inside the body. The system worked best when the data was encoded into electrical frequencies. The signals are changed and transformed by the "black box" of the reservoir.
Then the researchers could train a readout to interpret the results as one of several electrical patterns. In this way, the reservoir is trained to recognize and classify patterns of electrochemical information. One of the datasets the researchers tested their system on represented four types of heartbeats—one normal and three irregular. The AI could correctly distinguish between the four types of heartbeats 88% of the time. Importantly, the heartbeat data was part of an already existing dataset and was not collected from any people as part of the study. In the future, implantable devices using more specialized versions of this technology might be able to detect unusual electrical signals and medical concerns from within a person's body. The researchers write that this could be particularly useful after surgery. Leo imagines a device with a simple light display that would stay green if a heartbeat were normal and turn red if it became irregular. Cucchi said that such a device could enable doctors to "act immediately on the signal without losing time and money on analysis and invasive procedures." For now, the researchers said, the technology is nowhere near being used inside a person. The study only examined how a system like this could work. Use of it as a medical implant would require extensive preclinical and clinical testing. The hardware in the study also relied on outside power and had no internal power source, which an implant would need. The technology also raises questions about the implications of implanting an AI device in a person's body. The authors suggest that this system could be used online, or in real time, which would raise questions about how that data is presented and collected. Leo says these are important questions to consider alongside future research. "There is an ongoing discussion about AI and how you apply it, and its potential for misuse," said Leo. "It's definitely an issue here."
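    For readers unfamiliar with reservoir computing, the sketch below shows the general recipe the article describes: a fixed, randomly configured reservoir transforms input signals, and only a simple linear readout is trained to classify the resulting states. It is a generic software analogue using numpy, not the Dresden team's organic hardware; the reservoir size, the sinusoidal stand-ins for heartbeat classes, and the ridge-regression readout are all illustrative assumptions.

        import numpy as np

        # Generic reservoir-computing sketch: a fixed random reservoir transforms an
        # input signal; only the linear readout is trained (here with ridge regression).
        # In the TU Dresden study the "reservoir" is a network of PEDOT fibers, not code.

        rng = np.random.default_rng(0)
        N_RES = 100                                   # reservoir size (illustrative)
        W_in = rng.normal(0, 0.5, N_RES)              # fixed random input weights
        W_res = rng.normal(0, 1.0, (N_RES, N_RES))    # fixed random recurrent weights
        W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))   # keep the dynamics stable

        def reservoir_state(signal):
            """Drive the reservoir with a 1-D signal and return its final state."""
            x = np.zeros(N_RES)
            for u in signal:
                x = np.tanh(W_in * u + W_res @ x)
            return x

        # Toy stand-ins for heartbeat classes: sinusoids at different frequencies,
        # echoing the article's point that frequency encoding worked best.
        def make_beat(freq, n=200):
            t = np.linspace(0, 1, n)
            return np.sin(2 * np.pi * freq * t) + 0.05 * rng.normal(size=n)

        freqs = [1.0, 2.0, 3.0, 4.0]                  # four made-up "heartbeat types"
        X = np.array([reservoir_state(make_beat(f)) for f in freqs for _ in range(30)])
        y = np.repeat(np.arange(4), 30)

        # Train only the readout: one-hot targets, ridge regression.
        T = np.eye(4)[y]
        W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N_RES), X.T @ T)
        pred = (X @ W_out).argmax(axis=1)
        print("training accuracy:", (pred == y).mean())

    The design choice the article highlights carries over directly: the random part (the reservoir) is never trained, so all the learning happens in a cheap linear readout, which is what makes randomly grown hardware usable for classification.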

  • This Memory Tech Is Better When It's Bendy
    by Samuel K. Moore on 9. September 2021. at 18:28

    For stick-on displays, smart bandages, and cheap flexible plastic sensors to really take off, they'll need some way of storing data for the long term that can be built on plastic. "In the ecosystem of flexible electronics, having memory options is very important," says Stanford electrical engineering professor Eric Pop. But versions of today's non-volatile memories, such as Flash, aren't a great fit. So when Pop and his team of engineers decided to try adapting a type of phase-change memory to plastic, they figured it would be a long shot. What they came up with was a surprise—a memory that actually works better because it's built on plastic. The energy needed to reset the memory, a critical feature for this type of device, is an order of magnitude lower than in previous flexible versions. They reported their findings this week in Science. Phase-change memory (PCM) is not an obvious win for plastic electronics. It stores its bit as a resistive state. In its crystalline phase, it has a low resistance. But running enough current through the device melts the crystal, allowing it to then freeze in an amorphous phase that is more resistive. The process is reversible. Importantly, especially for experimental neuromorphic systems, PCM can store intermediate levels of resistance, so a single device can hold more than one bit of data. Unfortunately, the usual set of materials involved doesn't work well on flexible substrates like plastic. The problem is "programming current density": basically, how much current do you need to pump through a given area in order to heat it up to the temperature at which the phase change takes place? The uneven surface of bendy plastic means PCM cells using the usual materials can't be made as small as they are on silicon, requiring more current to achieve the same switching temperature. Think of it as trying to bake a pie in an oven with the door slightly ajar. It will work, but it takes a lot more time and energy. Pop and his colleagues were looking for a way to close the oven door. They decided to try a material called a superlattice, a crystal made from repeating, nanometers-thick layers of different materials. Junji Tominaga and researchers at the National Institute of Advanced Industrial Science and Technology in Tsukuba, Japan, had reported promising results back in 2011 using a superlattice composed of germanium, antimony, and tellurium. Studying these superlattices, Pop and his colleagues concluded that they should be very thermally insulating, because in their crystalline form there are atomic-scale gaps between the layers. These "van der Waals-like gaps" restrict both the flow of current and, crucially, heat. So when current is forced through, the heat doesn't quickly drain away from the superlattice, and that means it takes less energy to switch from one phase to another. But the superlattice work was hardly a slam dunk. "We started working on it several years ago, but we really struggled and almost gave up," says Pop. The superlattice works if the van der Waals gaps are oriented parallel to each other and without major mixing between layers, Pop explains. But the peculiarities of the material deposition equipment involved mean that "just because they published their parameters in Japan, doesn't mean you can use them in a tool in Palo Alto."
Asir Intisar Khan, a doctoral candidate working with Pop, had to push through a trial-and-error process that involved more than 100 attempts to produce superlattices with the right van der Waals gaps: structures of alternating antimony telluride and germanium telluride layers, with gaps between the layers that restrict the flow of current and heat. The researchers kept the heat in the memory device by confining the flow of current to a 600-nanometer-wide pore-like structure surrounded by insulating aluminum oxide. The final layer of insulation was the plastic itself, which resists the flow of heat considerably better than the silicon PCM is usually built on. The completed device had a programming current density of about 0.1 mega-amperes per square centimeter, about two orders of magnitude lower than conventional PCM on silicon and an order of magnitude better than previous flexible devices. Furthermore, it showed four stable resistance states, so a single device can store multiple bits of data. That building the device on plastic would actually improve things wasn't something the team had planned. Alwin Daus, a postdoctoral researcher in the lab with expertise in flexible electronics, says the team assumed that the titanium nitride electrode between the superlattice and the substrate would limit heat loss, and thus that the substrate would not influence the memory operation. But later simulations confirmed that heat penetrates into the plastic substrate, which has a low thermal conductivity compared with silicon substrates. The work reported this week is a proof of concept for low-power storage on flexible surfaces, Khan says. But the importance of thermal insulation applies to silicon devices as well. The team hopes to improve the devices by further shrinking the pore diameter and by making the sides of the device more insulating. Simulations already show that making the aluminum oxide walls thicker reduces the current needed to reach the switching temperature. The researchers will also look into other superlattice structures that might have even better properties.
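    As a rough sanity check on the figures quoted above, the snippet below converts the reported current density (about 0.1 MA per square centimeter) and the 600-nanometer pore into an approximate programming current, and compares it with a conventional-PCM density two orders of magnitude higher. The calculation is ours and purely illustrative; it assumes the current flows uniformly through a circular pore cross section.

        import math

        # Rough, illustrative estimate of programming current from the figures quoted
        # above: ~0.1 MA/cm^2 through a 600-nm-wide pore (assumed circular cross section).

        pore_diameter_m = 600e-9                                 # 600 nm pore
        area_cm2 = math.pi * (pore_diameter_m * 100 / 2) ** 2    # convert m -> cm, then area

        j_flexible = 0.1e6                   # A/cm^2, reported for the flexible superlattice device
        j_conventional = j_flexible * 100    # ~two orders of magnitude higher, per the article

        for label, j in [("flexible superlattice PCM", j_flexible),
                         ("conventional PCM on silicon", j_conventional)]:
            current_a = j * area_cm2
            print(f"{label}: ~{current_a * 1e6:.0f} microamps to switch")

    Under these assumptions the flexible device switches with a few hundred microamps while the conventional density implies tens of milliamps through the same pore, which is the gap the lower programming current density buys.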