IEEE News

IEEE Spectrum

  • Video Friday: ReachBot
    by Evan Ackerman on 3. February 2023. at 17:23

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    Robotics Summit & Expo: 10–11 May 2023, BOSTON
    ICRA 2023: 29 May–2 June 2023, LONDON
    RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
    RSS 2023: 10–14 July 2023, DAEGU, KOREA
    IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA
    CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL

    Enjoy today’s videos!

    ReachBot is a new concept for planetary exploration, consisting of a small body and long, lightweight extending arms loaded primarily in tension. The arms are equipped with spined grippers for anchoring on rock surfaces. Experiments with rock grasping and coordinated locomotion illustrate the advantages of low-inertia passive grippers, triggered by impact and using stored mechanical energy for the internal force. [ Paper ]

    DHL Supply Chain is deploying Stretch to automate trailer unloading and support warehouse associates. In the past 8 to 10 years there have been tremendous advancements in warehouse automation, and DHL has been a leader in deploying automation technology to improve efficiency, drive cost-effectiveness, and support exceptional employee experiences. Discover how the company is putting Stretch to work. [ Boston Dynamics ]

    Scientists at the University of Bristol have drawn on the design and life of a mysterious zooplankton to develop underwater robots. These robotic units, named RoboSalps after their animal namesakes, have been engineered to operate in unknown and extreme environments, such as extraterrestrial oceans. RoboSalps are unique in that each individual module can swim on its own. This is possible because of a small motor with rotor blades—typically used for drones—that is inserted into the soft tubular structure. When swimming on their own, RoboSalps modules are difficult to control, but after joining them together to form colonies, they become more stable and show sophisticated movements. [ Bristol ]

    AIce is an autonomous Zamboni convoy designed to automate ice resurfacing in any ice rink. The current goal is to demonstrate an autonomous driving task based on the leader-follower method, utilizing computer vision, motion planning, control, and localization. The team aspires to build the project in a way that will let it grow, once completed, into a fully autonomous Zamboni. [ AIce ] via [ CMU ]

    We propose a new neck design for legged robots to achieve robust visual-inertial state estimation in dynamic locomotion. While visual-inertial state estimation is widely used in robotics, it can be disturbed by the impacts and vibration generated when legged robots move dynamically. To address this problem, we develop a tunable neck system that absorbs the impacts and vibration during diverse gaits. [ Paper ]

    I will not make any comments about meat-handling robots. [ Soft Robotics ]

    This should be pretty cool to see once it’s running on hardware. [ Paper ]

    A largely untapped potential for aerial robots is to capture airborne targets in flight. We present an approach in which a simple dynamic model of a quadrotor/target interaction leads to the design of a gripper and an associated velocity sufficiency region with a high probability of capture. We demonstrate in-flight experiments in which a 550-gram drone captures an 85-gram target at relative velocities between 1 and 2.7 meters per second. [ Paper ]

    The process of bin picking presents new challenges again and again. To deal with small and flat component geometries as well as with entanglements and packaging material, machine-learning methods are used at Fraunhofer IPA. In addition to increasing the robustness of the removal process, the work also aims to minimize process time and commissioning effort. [ Fraunhofer ]

    The history of lidar: After the devastating loss of Mars Observer, the Goddard team mourns and regroups to build a second MOLA [Mars Orbiter Laser Altimeter] instrument for the Mars Global Surveyor mission. But before their laser altimeter goes to Mars, the team seizes an opportunity to test it on the space shuttle. [ NASA ] [Leaders in Lidar, Chapter 1]

    What are the challenges in the development of humanoid robotic systems? What are the advantages, and what are the critical issues? Bruno Siciliano, coordinator of PRISMA Lab, discusses these themes with Fabio Puglia, president and cofounder of Oversonic Robotics. Moderated by science journalist Riccardo Oldani, Siciliano and Puglia also present concrete cases of the development of two humanoid robots, Rodyman and RoBee, respectively, and their applications. [ PRISMA Lab ]

    Please join us for a lively panel discussion featuring GRASP faculty members Dr. Nadia Figueroa, Dr. Dinesh Jayaraman, and Dr. Marc Miskin. The panel will be moderated by Penn Engineering SEAS Dean Dr. Vijay Kumar. [ UPenn ]

    An interactive webinar discussing how progress in robotic materials is affecting the field of manipulation, the second conversation in a series hosted by Northwestern’s Center for Robotics and Biosystems. Moderator: Carmel Majidi, Carnegie Mellon University. Panelists: Elliot W. Hawkes, UC Santa Barbara; Tess Hellebrekers, Meta AI; Nancy Pollard, Carnegie Mellon University; Yon Visell, UC Santa Barbara. [ Northwestern ]

    At the 2022 Conference on Robot Learning (CoRL), Waymo’s head of research, Drago Anguelov, shared some of his team’s recent research on improving behavior models. [ Waymo ]

    This week’s CMU RI Seminar, “Motion Planning Around Obstacles with Graphs of Convex Sets,” is from Russ Tedrake. [ CMU ]

  • Evolution of In-Vehicle Networks to Zonal Architecture
    by Rohde & Schwarz on 3. February 2023. at 15:12

    As the automotive industry develops vehicles with increased levels of autonomous driving, requiring more sensors and increased connectivity, it faces the challenge of transporting and processing a huge amount of data in the vehicle. To do this efficiently, it is necessary to reduce in-vehicle network complexity, power consumption, and weight, leading to a change from domain-oriented network architecture to zonal architecture. Join this webinar to learn about this significant development in in-vehicle networks and how to test it effectively.

    In this webinar, you will learn more about:

    Evolution of in-vehicle network architecture
    Automotive Ethernet characteristics
    Compliance testing of Ethernet
    Practical demonstration

    Register now to attend this free webinar!

    Presenters:

    RALF OESTREICHER, Automotive Market Segment Manager, Rohde & Schwarz: Ralf Oestreicher is an automotive market segment manager at Rohde & Schwarz, where he is focused on bringing together product strategy, business development, and marketing to ensure the company’s e-mobility test solutions meet customer requirements. Prior to joining R&S, Ralf was global key account manager at Isabellenhuette Heusler, gaining in-depth knowledge of electric vehicle testing.

    JITHU ABRAHAM, Product Manager, Oscilloscopes and Probes, Rohde & Schwarz: Jithu Abraham works for Rohde & Schwarz as a product manager for the UK, Ireland, and the Benelux region, specializing in oscilloscopes. He enjoys all aspects of high-speed digital, wireless communication, and efficient power conversion, and all the challenges they bring. Jithu holds an engineering degree in electronics and communication from Anna University in India and a master’s degree in RF systems from the University of Southampton. He has been working for Rohde & Schwarz for over 12 years.

  • Laying the Foundation for Extended Reality
    by Srikanth Chandrasekaran on 2. February 2023. at 19:00

    Some observers say the metaverse is an expanded set of digital worlds that will grow out of the online environments people are already familiar with, such as an enhanced version of the extended-reality (XR) experience used in online gaming. The world they imagine is expected to offer new features and capabilities that accelerate society’s digital transformation and enhance sustainability by reducing the need for people to travel to meetings and perform resource-intensive activities. Others say the metaverse will usher in a decentralized ecosystem that empowers users to create digital assets of their own choosing and engage in digital commerce. Because the architecture would be open, decentralized, and without gatekeepers, this version is expected to democratize the Internet by making it transparent, accessible, and interoperable for everyone.

    However the metaverse evolves, one thing is certain: It has tremendous potential to fundamentally transform the ways we work, learn, play, and live. But there will be issues to deal with along the way. That is why the IEEE Standards Association (IEEE SA) is working to help define, develop, and deploy the technologies, applications, and governance practices needed to turn metaverse concepts into practical realities and to drive new markets.

    Technical and societal challenges

    The technical and societal challenges that come with designing and building metaverse environments include:

    Better user interfaces
    Lower system latency
    More tightly integrated, interoperable XR technologies
    Better 3D modeling and volumetric video rendering
    Improved ways to acquire, render, store, and protect geospatial data
    Lower power consumption
    Interaction with the Internet

    Consensus is also needed to address the wide variety of views held on technosocial issues such as user identity, credentialing, privacy, openness, ethics, accessibility, and user safety.
    New technical standards

    IEEE SA recently formed its metaverse standards committee, the first committee of a major worldwide standards development organization designed to advance metaverse-related technologies and applications. It will do so by developing and maintaining technical standards, creating recommended practices, and writing guides. In addition, technical standards and activities are incubating new ideas on topics expected to be of great interest to industry.

    The IEEE P2048 Standard for Metaverse: Terminology, Definitions, and Taxonomy, for example, is designed to define the vocabulary, categories, and levels of a metaverse to establish a common ground for ongoing discussions, facilitate the sustainable development of metaverse-related activities, and promote the healthy growth of metaverse markets.

    The IEEE P7016 Standard for Ethically Aligned Design and Operation of Metaverse Systems will provide a high-level overview of the technosocial aspects of metaverse systems and specify an ethical assessment methodology for use in their design and operation. The standard will include guidance to developers on how to adapt their processes to prioritize ethically aligned design. In addition, IEEE P7016 will help define ethical system content on accessibility and functional safety. Also included will be guidance on how to promote ethically aligned values and robust public engagement in the research, implementation, and proliferation of metaverse systems to increase human well-being and environmental sustainability.

    Two industry-focused initiatives

    IEEE SA also recently launched two Industry Connections (IC) activities specifically for the metaverse. The IC program facilitates collaboration and consensus-building among participants. It also provides IEEE resources to help produce standards proposals, white papers and other reports, events, software tools, and Web services.
    The Decentralized Metaverse Initiative aims to develop and provide guidelines for implementing decentralized metaverses, which not only could capitalize on intellectual property and virtual assets in decentralized ways but also could benefit from other potential features of decentralized architectures.

    The Persistent Computing for Metaverse Initiative will focus on the technologies needed to build, operate, and upgrade metaverse experiences, including computation, storage, communications, data structures, and artificial intelligence. The group will facilitate discussions and collaborations on persistent computing, steer and give advice on research and development, and provide technical guidelines and references.

    Webinars with experts

    The IEEE Metaverse Congress offers a series of webinars that provide a comprehensive, global view from experts involved in the technology’s development, design, and governance. Join the Metaverse Community to help develop this new area, advance your organization’s viewpoint, and engage with others.

    This article is an edited excerpt of the “Why Are Standards Important for the Metaverse?” blog entry, published in November 2022.

  • Autonomous Drive Emulation: Applying C-V2X Test Solutions Across the Automotive Workflow
    by Keysight on 1. February 2023. at 13:00

    Achieving fully autonomous driving relies on vehicle-to-everything (V2X) communication between the surrounding infrastructure and in-vehicle sensors. The functionality and safety of systems that incorporate V2X must be verified across a variety of situations and conditions. As the breadth and depth of such testing increases, it quickly becomes too expensive, impractical, and risky to use actual vehicles on closed or public roads. Verification will increasingly depend on detailed simulation and testing in the lab. Download this free whitepaper now!

  • IEEE Medal of Honor Goes to Vint Cerf
    by Joanna Goodrich on 31. January 2023. at 19:00

    IEEE Life Fellow Vinton “Vint” Cerf, widely known as the “Father of the Internet,” is the recipient of the 2023 IEEE Medal of Honor. He is being recognized “for co-creating the Internet architecture and providing sustained leadership in its phenomenal growth in becoming society’s critical infrastructure.” The IEEE Foundation sponsors the annual award.

    While working as a program manager at the U.S. Defense Advanced Research Projects Agency (DARPA) Information Processing Techniques Office in 1974, Cerf and IEEE Life Fellow Robert Kahn designed the Transmission Control Protocol and the Internet Protocol. TCP manages data packets sent over the Internet, making sure they don’t get lost, are received in the proper order, and are reassembled at their destination correctly. IP manages the addressing and forwarding of data to and from its proper destinations. Together they make up the Internet’s core architecture and enable computers to connect and exchange traffic.

    “Cerf’s tireless commitment to the Internet’s evolution, improvement, oversight, and evangelism throughout its history has made an indelible impact on the world,” said one of the endorsers of the award. “It is largely due to his efforts that we even have the Internet, which has changed the way society lives.

    “The Internet has enabled a large part of the world to receive instant access to news, brought us closer to friends and loved ones, and made it easier to purchase products online,” the endorser said. “It’s improved access to education and scientific discourse, made smartphones useful, and opened the door for social media, cloud computing, video conferencing, and streaming. Cerf also saw early on the importance of decentralized control, with no one company or government completely in charge.”

    Since 2005, Cerf has been vice president and chief Internet evangelist at Google in Reston, Va., spreading the word about adopting the Internet in service to the public good.
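    The TCP/IP division of labor described above is still the abstraction every networked program relies on. As a hedged illustration (not drawn from the article; the loopback host and port number are made up for the demo), this minimal Python sketch shows TCP presenting an ordered, reliable byte stream to the application while IP handles the addressing and routing underneath:

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 9099   # illustrative loopback address
received = []                    # filled in by the server thread
ready = threading.Event()

def serve():
    # A TCP server: bind to an IP address and port, then read the
    # byte stream until the peer closes the connection.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # signal that we are accepting
        conn, _ = srv.accept()
        with conn:
            data = b""
            while True:
                chunk = conn.recv(1024)
                if not chunk:            # client closed the connection
                    break
                data += chunk
            received.append(data.decode())

server = threading.Thread(target=serve)
server.start()
ready.wait()

# The client sends two separate writes; TCP guarantees the receiver
# sees them complete and in order, regardless of how IP fragments
# and routes the underlying packets.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    for part in (b"Hello, ", b"Internet"):
        cli.sendall(part)

server.join()
print(received[0])
```

    Nothing in the application code deals with lost or reordered packets: that is exactly the reliability layer TCP adds on top of IP’s best-effort delivery.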
    He is responsible for identifying new technologies and enabling policies that support the development of advanced, Internet-based products and services.

    Enhancing the World Wide Web

    Cerf left DARPA in 1982 to join Microwave Communications Inc. (now part of WorldCom), headquartered in Washington, D.C., as vice president of its digital information services division. A year later, he led the development of MCI Mail, the first commercial email service on the Internet. In 1986 he left the company to become vice president of the newly formed Corporation for National Research Initiatives, also in Reston. He worked alongside Kahn at the not-for-profit organization, developing digital libraries, gigabit-speed networks, and knowledge robots (mobile software agents used in computer networks). He returned to MCI in 1994 and served as a senior vice president for 11 years before joining Google.

    Together with Kahn, Cerf founded the nonprofit Internet Society in 1992. The organization helps set technical standards, develops Internet infrastructure, and helps lawmakers set policy. Cerf served as its president from 1992 to 1995 and was chairman of the board of the Internet Corp. for Assigned Names and Numbers from 2000 to 2007. ICANN works to ensure a stable, secure, and interoperable Internet by managing the assignment of unique IP addresses and domain names. It also maintains tables of registered parameters needed for the protocol standards developed by the Internet Engineering Task Force.

    Cerf has received several recognitions for his work, including the 2004 Turing Award from the Association for Computing Machinery. The honor is known as the Nobel Prize of computing. Together with Kahn, he was awarded a 2013 Queen Elizabeth Prize for Engineering, a 2005 U.S. Presidential Medal of Freedom, and a 1997 U.S. National Medal of Technology and Innovation.

  • Transparency Depends on Digital Breadcrumbs
    by Harry Goldstein on 31. January 2023. at 16:00

    From the moment we wake up and reach for our smartphones, and throughout the day as we text each other, upload selfies to social media, shop, commute, work, work out, watch streaming media, pay bills, and travel, and even while we’re sleeping, we spew personal data like jets sketching contrails across the sky. An astonishing amount of that data is recorded, stored, analyzed, and shared by media companies looking to pitch you content and ads, retailers aiming to sell you more of what you’ve already bought, and potential distant relatives hitting you up on genealogy sites. And sometimes, if you’re suspected of participating in illegal activities, that data can bring you under the scrutiny of law enforcement officials.

    Contributing Editor Mark Harris spent months poring over court documents and other records to understand how the U.S. Federal Bureau of Investigation and other agencies exploited vast troves of data to conduct the largest criminal investigation in U.S. history: the investigation into the violent overtaking of the Capitol building on 6 January 2021.

    The events of that day unfolded on live television, watched by millions. But to identify suspects amid a mob of thousands, investigators had to cast a very wide net, seeking the cooperation of tech giants like Google, Facebook, and Snap and carriers like Verizon and T-Mobile. As Harris painstakingly documents in “How the Police Exploited the Capitol Rioters’ Digital Records,” some of the information the FBI used was intentionally shared by rioters on social media, while other information was gleaned from the kind of data we all heedlessly cast off during the course of the day, like the order for pizza that landed one group of rioters in hot water or the automated license-plate readings that were cited in 20 cases.
    The ability to ingest multiple data streams and analyze them to trace rioters’ journeys to, through, and back from the Capitol has led to 950 arrests, with more than half leading to guilty pleas and 40 to guilty verdicts as of this writing. But as the privacy advocates Harris interviewed point out, while these tools helped law enforcement hold some people accountable for their actions that day, those same tools can be used by the state against law-abiding citizens, not just in the United States but anywhere. And the data we make available (knowingly or not), often for the sake of convenience or as the price of admission, leaves us vulnerable to bad actors, be they governments, corporations, or individuals.

    The writer David Brin foretold a version of our current panopticon in his 1998 book The Transparent Society. In it, he acknowledges the risks of surveillance technology but contends that the very ubiquity of that technology is in itself a safeguard against abuse, because it gives everyone the ability to shine a light on the dark corners of individual and institutional behavior. His stance jibes with Harris’s final observation: “In the eternal struggle between security and privacy, the best that digital-rights activists can hope for is to watch the investigators as closely as they are watching us.”

    But as Brin points out, watching the watchers isn’t enough to guarantee a free and open society. As data-driven prosecutions for the 6 January insurrection continue, it’s worth considering the linkage Brin makes between liberty and accountability, which, he says, “is the one fundamental ingredient on which liberty thrives. Without the accountability that derives from openness—enforceable upon even the mightiest individuals and institutions—how can freedom survive?”

  • RCA’s Lucite Phantom Teleceiver Introduced the Idea of TV
    by Allison Marsh on 30. January 2023. at 16:30

    On 20 April 1939, David Sarnoff, president of the Radio Corporation of America, addressed a small crowd outside the RCA pavilion at the New York World’s Fair. “Today we are on the eve of launching a new industry, based on imagination, on scientific research and accomplishment,” he proclaimed. That industry was television.

    Sarnoff’s speech was unusual for the United States at that time simply because it was the first time a news event was broadcast live on television. Although television technology had been in development for decades, and the BBC had been airing live programs in the United Kingdom since 1929, competing technologies and licensing disputes kept the U.S. television market from taking off. With the World’s Fair and its theme of the World of Tomorrow, Sarnoff aimed to change that. Ten days after Sarnoff’s speech, the National Broadcasting Corporation (NBC), a fully owned subsidiary of RCA, began a regular slate of television programming, beginning with President Franklin Delano Roosevelt’s speech officially opening the fair.

    RCA’s Phantom Teleceiver was the TV of tomorrow

    The architecture of RCA’s pavilion at the fair was a nod to the company’s history: Designed by Skidmore & Owings, it was shaped like a radio vacuum tube. But the inside held a vision of the future. Entering the pavilion, fairgoers encountered the Phantom Teleceiver, RCA’s latest technological wonder. This special model of the TRK-12 television receiver, which today we would call a television set or simply a TV, was housed in a cabinet constructed from DuPont’s new clear plastic, Lucite. The transparent case allowed visitors to inspect the inner workings from all sides. An unusual aspect of the TRK-12 was its vertically positioned cathode-ray tube, which projected the image upward onto a 30.5-centimeter (12-inch) mirror on the underside of the cabinet lid.
    Industrial designer John Vassos, who was responsible for creating the shape of RCA’s televisions, found the size of that era’s tubes to be a unique challenge. Had the CRT been positioned horizontally, the television cabinet would have pushed out almost a meter into the room. As it was, the set was a heavyweight, standing 102 cm tall and weighing more than 91 kilograms. The image in the mirror was the reverse of that projected by the CRT, but Vassos must have decided it wasn’t a deal breaker.

    According to art historian Danielle Shapiro, the author of John Vassos: Industrial Designer for Modern Life, Vassos drew on the modernist principles of streamlining to design the cabinetry for the TRK-12. In addition to contending with the size of the tube, he had to find a way to dissipate its extreme heat. He chose to integrate vents throughout the cabinet, creating a louver as a design motif. Production sets (meaning all the ones not made out of Lucite for the fair) were crafted from different shades and patterns of walnut with stripes of walnut veneer, so the overall look was of an elegant wooden box.

    (If you want to see the original World’s Fair TV, it now resides at the MZTV Museum of Television, in Toronto. A clever replica, built by the Early Television Museum with an LCD screen instead of a vintage cathode-ray tube, is at the ACMI in Melbourne, Australia.)

    The TRK-12 wasn’t just a TV. It was the first multimedia center. The cabinet housed the television as well as a three-band, all-wave radio and a Victrola switch to attach an optional phonograph, the sound from which would play through the radio speaker. A fidelity selector knob allowed users to switch easily among the different entertainment options, and a single knob controlled the power and volume for all settings.
    On the left-hand side of the console were two radio knobs (range selector and tuning control), and on the right were three dual-control knobs for the television (vertical and horizontal hold; station selection and fine tuning; and contrast and brightness). Although the home user could select any of five television stations and fiddle with the picture quality, a bold-faced warning in the owner’s manual cautioned that only a competent television technician should install the receiver, because it could produce high voltages and electrical shocks.

    In 1939, TV was still so novel that the owner’s manual for the TRK-12 devoted a section to explaining “How You Receive Television Pictures”: “Television reception follows the laws governing high frequency wave transmission and reception. Television waves act in many respects like light waves.” So long as you knew how light waves behaved, you were good.

    In addition to designing the television sets for the fair, Vassos created two exhibits to help new users envision how these machines could fit into their homes. When David Sarnoff gave his dedication speech, for example, only a few hundred people were able to watch it live, simply because so few people owned TV sets. Shapiro argues that Vassos was one of the earliest modern designers to focus on the user experience and to try to alleviate the anxiety and frenzy caused by the urban environment. His design for the Radio Living Room of Today blended the latest RCA technology, including a facsimile machine, with contemporary furnishings. In 1940, Vassos added the Radio Living Room of Tomorrow. This exhibit, dubbed the Musicorner, included dimmable fluorescent lights to allow for ideal television-watching conditions. Foreshadowing cassette recorders and CD burners was a device for recording and producing phonograph records.
    Tasteful modular cabinets concealed the television and radio receivers, not unlike some style trends today. RCA designer John Vassos’s stylish Musicorner room incorporated cutting-edge technology for watching TV and recording phonographs. Credit: Archives of American Art

    Each day, thousands of visitors to the RCA pavilion encountered television, often for the first time, and watched programming on 13 TRK-12 receivers. But if television really was going to be the future, RCA had to convince consumers to buy sets. Throughout the fair’s 18-month run, the company arranged to have four models of television receivers, all designed by Vassos, available for sale at department stores in the New York metropolitan region. The smallest of these was the TT-5 tabletop television, which provided only a picture; it plugged into an existing radio to receive sound. The TT-5 was considered the “everyman’s version” and had a starting price of $199 (about $4,300 today). Next biggest was the TRK-5, then the TRK-9, and finally the TRK-12, which sold for $600 (nearly $13,000 today). Considering that the list price of a modest new automobile in 1939 was $700 and the average annual income was $1,368, even the everyman’s television remained beyond the reach of most families.

    Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the February 2023 print issue as “Yesterday’s TV of Tomorrow.”

  • Evolution and Impact of Wi-Fi Technology
    by Anritsu on 30. January 2023. at 13:00

    The importance of Wi-Fi as the most popular carrier of wireless IP traffic continues in the emerging era of IoT and 5G, and Wi-Fi standards are continually evolving to support next-generation applications such as positioning, human-computer interfacing, motion and gesture detection, and authentication and security. Wi-Fi is a go-to connectivity technology that is stable, proven, easy to deploy, and desired by a host of diversified vertical markets, including medical, public safety, offender tracking, industrial, PDA, and security (monitoring) applications, among others. Register now for this free webinar!

    This webinar will cover:

    The evolution of Wi-Fi technology, standards, and applications
    Key aspects and features of Wi-Fi 6 and Wi-Fi 7
    Wi-Fi market trends and outlook
    Solutions that will help build robust Wi-Fi 6 and Wi-Fi 7 networks

    This lecture is based on the presenter’s recent paper: Pahlavan, K., and Krishnamurthy, P., 2021. “Evolution and impact of Wi-Fi technology and applications: A historical perspective.” International Journal of Wireless Information Networks, 28(1), pp. 3–19.

  • Roboticists Want to Give You a Third Arm
    by Carsten Mehring on 29. January 2023. at 16:00

    What could you do with an extra limb? Consider a surgeon performing a delicate operation, one that needs her expertise and steady hands—all three of them. As her two biological hands manipulate surgical instruments, a third robotic limb attached to her torso plays a supporting role. Or picture a construction worker who is thankful for his extra robotic hand as it braces the heavy beam he’s fastening into place with his other two hands. Imagine wearing an exoskeleton that would let you handle multiple objects simultaneously, like Spider-Man’s nemesis Doctor Octopus. Or contemplate the out-there music a composer could write for a pianist who has 12 fingers to spread across the keyboard.

    Such scenarios may seem like science fiction, but recent progress in robotics and neuroscience makes extra robotic limbs conceivable with today’s technology. Our research groups at Imperial College London and the University of Freiburg, in Germany, together with partners in the European project NIMA, are now working to figure out whether such augmentation can be realized in practice to extend human abilities. The main questions we’re tackling involve both neuroscience and neurotechnology: Is the human brain capable of controlling additional body parts as effectively as it controls biological parts? And if so, what neural signals can be used for this control?

    We think that extra robotic limbs could be a new form of human augmentation, improving people’s abilities on tasks they can already perform as well as expanding their ability to do things they simply cannot do with their natural human bodies. If humans could easily add and control a third arm, or a third leg, or a few more fingers, they would likely use them in tasks and performances that went beyond the scenarios mentioned here, discovering new behaviors that we can’t yet even imagine.

    Levels of human augmentation

    Robotic limbs have come a long way in recent decades, and some are already used by people to enhance their abilities.
    Most are operated via a joystick or other hand controls. For example, that’s how workers on manufacturing lines wield mechanical limbs that hold and manipulate components of a product. Similarly, surgeons who perform robotic surgery sit at a console across the room from the patient. While the surgical robot may have four arms tipped with different tools, the surgeon’s hands can control only two of them at a time. Could we give these surgeons the ability to control four tools simultaneously? Robotic limbs are also used by people who have amputations or paralysis. That includes people in powered wheelchairs controlling a robotic arm with the chair’s joystick and those who are missing limbs controlling a prosthetic by the actions of their remaining muscles. But a truly mind-controlled prosthesis is a rarity. The pioneers in brain-controlled prosthetics are people with tetraplegia, who are often paralyzed from the neck down. Some of these people have boldly volunteered for clinical trials of brain implants that enable them to control a robotic limb by thought alone, issuing mental commands that cause a robot arm to lift a drink to their lips or help with other tasks of daily life. These systems fall under the category of brain-machine interfaces (BMI). Other volunteers have used BMI technologies to control computer cursors, enabling them to type out messages, browse the Internet, and more. But most of these BMI systems require brain surgery to insert the neural implant and include hardware that protrudes from the skull, making them suitable only for use in the lab. Augmentation of the human body can be thought of as having three levels. The first level increases an existing characteristic, in the way that, say, a powered exoskeleton can give the wearer super strength.
    The second level gives a person a new degree of freedom, such as the ability to move a third arm or a sixth finger, but at a cost—if the extra appendage is controlled by a foot pedal, for example, the user sacrifices normal mobility of the foot to operate the control system. The third level of augmentation, and the least mature technologically, gives a user an extra degree of freedom without taking mobility away from any other body part. Such a system would allow people to use their bodies normally by harnessing some unused neural signals to control the robotic limb. That’s the level that we’re exploring in our research.

    Deciphering electrical signals from muscles

    Third-level human augmentation can be achieved with invasive BMI implants, but for everyday use, we need a noninvasive way to pick up brain commands from outside the skull. For many research groups, that means relying on tried-and-true electroencephalography (EEG) technology, which uses scalp electrodes to pick up brain signals. Our groups are working on that approach, but we are also exploring another method: using electromyography (EMG) signals produced by muscles. We’ve spent more than a decade investigating how EMG electrodes on the skin’s surface can detect electrical signals from the muscles that we can then decode to reveal the commands sent by spinal neurons. Electrical signals are the language of the nervous system. Throughout the brain and the peripheral nerves, a neuron “fires” when a certain voltage—some tens of millivolts—builds up within the cell and causes an action potential to travel down its axon, releasing neurotransmitters at junctions, or synapses, with other neurons, and potentially triggering those neurons to fire in turn. When such electrical pulses are generated by a motor neuron in the spinal cord, they travel along an axon that reaches all the way to the target muscle, where they cross special synapses to individual muscle fibers and cause them to contract.
    We can record these electrical signals, which encode the user’s intentions, and use them for a variety of control purposes.

    Figure: How the Neural Signals Are Decoded. A training module [orange] takes an initial batch of EMG signals read by the electrode array [left], determines how to extract signals of individual neurons, and summarizes the process mathematically as a separation matrix and other parameters. With these tools, the real-time decoding module [green] can efficiently extract individual neurons’ sequences of spikes, or “spike trains” [right], from an ongoing stream of EMG signals. Illustration: Chris Philpot

    Deciphering the individual neural signals based on what can be read by surface EMG, however, is not a simple task. A typical muscle receives signals from hundreds of spinal neurons. Moreover, each axon branches at the muscle and may connect with a hundred or more individual muscle fibers distributed throughout the muscle. A surface EMG electrode picks up a sampling of this cacophony of pulses. A breakthrough in noninvasive neural interfaces came with the discovery in 2010 that the signals picked up by high-density EMG, in which tens to hundreds of electrodes are fastened to the skin, can be disentangled, providing information about the commands sent by individual motor neurons in the spine. Such information had previously been obtained only with invasive electrodes in muscles or nerves. Our high-density surface electrodes provide good sampling over multiple locations, enabling us to identify and decode the activity of a relatively large proportion of the spinal motor neurons involved in a task. And we can now do it in real time, which suggests that we can develop noninvasive BMI systems based on signals from the spinal cord. The current version of our system consists of two parts: a training module and a real-time decoding module.
    To begin, with the EMG electrode grid attached to their skin, the user performs gentle muscle contractions, and we feed the recorded EMG signals into the training module. This module performs the difficult task of identifying the individual motor neuron pulses (also called spikes) that make up the EMG signals. The module analyzes how the EMG signals and the inferred neural spikes are related, which it summarizes in a set of parameters that can then be used with a much simpler mathematical prescription to translate the EMG signals into sequences of spikes from individual neurons. With these parameters in hand, the decoding module can take new EMG signals and extract the individual motor neuron activity in real time. The training module requires a lot of computation and would be too slow to perform real-time control itself, but it usually has to be run only once each time the EMG electrode grid is fixed in place on a user. By contrast, the decoding algorithm is very efficient, with latencies as low as a few milliseconds, which bodes well for possible self-contained wearable BMI systems. We validated the accuracy of our system by comparing its results with signals obtained concurrently by two invasive EMG electrodes inserted into the user’s muscle.

    Exploiting extra bandwidth in neural signals

    Developing this real-time method to extract signals from spinal motor neurons was the key to our present work on controlling extra robotic limbs. While studying these neural signals, we noticed that they have, essentially, extra bandwidth. The low-frequency part of the signal (below about 7 hertz) is converted into muscular force, but the signal also has components at higher frequencies, such as those in the beta band at 13 to 30 Hz, which are too high to control a muscle and seem to go unused. We don’t know why the spinal neurons send these higher-frequency signals; perhaps the redundancy is a buffer in case of new conditions that require adaptation.
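The two-stage design described above (an expensive one-off training step that produces a separation matrix, then a cheap real-time projection) can be sketched in a few lines. This is only an illustration under simplifying assumptions, not the authors’ actual algorithm: the function name, the simple peak-normalized thresholding, and the idealized signals are all ours.

```python
import numpy as np

def decode_spikes(emg, separation_matrix, threshold=0.5):
    """Real-time-style decoding step: project multichannel EMG through a
    precomputed separation matrix and detect per-neuron spikes by
    thresholding each estimated source signal."""
    # emg: (channels, samples); separation_matrix: (neurons, channels)
    sources = separation_matrix @ emg
    # Normalize each estimated source to its own peak amplitude
    peak = np.abs(sources).max(axis=1, keepdims=True) + 1e-12
    normalized = np.abs(sources) / peak
    # Boolean raster (neurons, samples): True where a spike is detected
    return normalized > threshold
```

The expensive part, estimating the separation matrix from training data (typically via blind source separation), runs once per electrode placement; after that, decoding is essentially one matrix multiply and a threshold per window of samples, which is why millisecond-scale latencies are plausible.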
    Whatever the reason, humans evolved a nervous system in which the signal that comes out of the spinal cord has much richer information than is needed to command a muscle. That discovery set us thinking about what could be done with the spare frequencies. In particular, we wondered if we could take that extraneous neural information and use it to control a robotic limb. But we didn’t know if people would be able to voluntarily control this part of the signal separately from the part they used to control their muscles. So we designed an experiment to find out.

    Figure: Neural Control Demonstrated. A volunteer exploits unused neural bandwidth to direct the motion of a cursor on the screen in front of her. Neural signals pass from her brain, through spinal neurons, to the muscle in her shin, where they are read by an electromyography (EMG) electrode array on her leg and deciphered in real time. These signals include low-frequency components [blue] that control muscle contractions, higher frequencies [beta band, yellow] with no known biological purpose, and noise [gray]. Illustration: Chris Philpot; Source: M. Bräcklein et al., Journal of Neural Engineering

    In our first proof-of-concept experiment, volunteers tried to use their spare neural capacity to control computer cursors. The setup was simple, though the neural mechanism and the algorithms involved were sophisticated. Each volunteer sat in front of a screen, and we placed an EMG system on their leg, with 64 electrodes in a 4-by-10-centimeter patch stuck to their shin over the tibialis anterior muscle, which flexes the foot upward when it contracts. The tibialis has been a workhorse for our experiments: It occupies a large area close to the skin, and its muscle fibers are oriented along the leg, which together make it ideal for decoding the activity of spinal motor neurons that innervate it.
    Figure: Results from the experiment in which low- and high-frequency neural signals, respectively, controlled horizontal and vertical motion of a computer cursor. Colored ellipses (with plus signs at centers) show the target areas. The top three diagrams show the trajectories (each one starting at the lower left) achieved for each target across three trials by one user. At bottom, dots indicate the positions achieved across many trials and users. Colored crosses mark the mean positions and the range of results for each target. Source: M. Bräcklein et al., Journal of Neural Engineering

    We asked our volunteers to steadily contract the tibialis, essentially holding it tense, and throughout the experiment we looked at the variations within the extracted neural signals. We separated these signals into the low frequencies that controlled the muscle contraction and spare frequencies at about 20 Hz in the beta band, and we linked these two components respectively to the horizontal and vertical control of a cursor on a computer screen. We asked the volunteers to try to move the cursor around the screen, reaching all parts of the space, but we didn’t, and indeed couldn’t, explain to them how to do that. They had to rely on the visual feedback of the cursor’s position and let their brains figure out how to make it move. Remarkably, without knowing exactly what they were doing, these volunteers mastered the task within minutes, zipping the cursor around the screen, albeit shakily. Beginning with one neural command signal—contract the tibialis anterior muscle—they were learning to develop a second signal to control the computer cursor’s vertical motion, independently from the muscle control (which directed the cursor’s horizontal motion). We were surprised and excited by how easily they achieved this big first step toward finding a neural control channel separate from natural motor tasks. But we also saw that the control was not accurate enough for practical use.
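To make the frequency-division idea concrete, here is a minimal sketch of how a decoded neural-drive signal could be split into the two control channels. This is our own illustration, not the study’s code: the sampling rate, exact band edges, and the use of mean rectified amplitude as the displacement command are all assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def cursor_command(neural_drive, fs=1000):
    """Split a neural-drive signal into a low band (muscle force,
    mapped to the horizontal axis) and a beta band (mapped to the
    vertical axis), returning one displacement value per axis."""
    def band(x, lo, hi):
        sos = butter(2, [lo, hi], btype="band", fs=fs, output="sos")
        return sosfiltfilt(sos, x)
    low = band(neural_drive, 0.5, 7.0)     # drives muscle contraction -> x
    beta = band(neural_drive, 13.0, 30.0)  # "spare" beta band -> y
    # Mean rectified band amplitude as the cursor displacement command
    return np.mean(np.abs(low)), np.mean(np.abs(beta))
```

In the actual experiment the volunteers received only visual feedback of the cursor and learned to modulate the beta channel themselves; code like this merely exposes the two bands as independent outputs.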
    Our next step will be to see if more accurate signals can be obtained and if people can use them to control a robotic limb while also performing independent natural movements. We are also interested in understanding more about how the brain performs feats like the cursor control. In a recent study using a variation of the cursor task, we concurrently used EEG to see what was happening in the user’s brain, particularly in the area associated with the voluntary control of movements. We were excited to discover that the changes happening to the extra beta-band neural signals arriving at the muscles were tightly related to similar changes at the brain level. As mentioned, the beta neural signals remain something of a mystery since they play no known role in controlling muscles, and it isn’t even clear where they originate. Our result suggests that our volunteers were learning to modulate brain activity that was sent down to the muscles as beta signals. This important finding is helping us unravel the potential mechanisms behind these beta signals. Meanwhile, at Imperial College London we have set up a system for testing these new technologies with extra robotic limbs, which we call the MUlti-limb Virtual Environment, or MUVE. Among other capabilities, MUVE will enable users to work with as many as four lightweight wearable robotic arms in scenarios simulated by virtual reality. We plan to make the system open for use by other researchers worldwide.

    Next steps in human augmentation

    Connecting our control technology to a robotic arm or other external device is a natural next step, and we’re actively pursuing that goal. The real challenge, however, will not be attaching the hardware, but rather identifying multiple sources of control that are accurate enough to perform complex and precise actions with the robotic body parts. We are also investigating how the technology will affect the neural processes of the people who use it.
    For example, what will happen after someone has six months of experience using an extra robotic arm? Would the natural plasticity of the brain enable them to adapt and gain a more intuitive kind of control? A person born with six-fingered hands can have fully developed brain regions dedicated to controlling the extra digits, leading to exceptional abilities of manipulation. Could a user of our system develop comparable dexterity over time? We’re also wondering how much cognitive load will be involved in controlling an extra limb. If people can direct such a limb only when they’re focusing intently on it in a lab setting, this technology may not be useful. However, if a user can casually employ an extra hand while doing an everyday task like making a sandwich, then that would mean the technology is suited for routine use. Other research groups are pursuing the same neuroscience questions. Some are experimenting with control mechanisms involving either scalp-based EEG or neural implants, while others are working on muscle signals. It is early days for movement augmentation, and researchers around the world have just begun to address the most fundamental questions of this emerging field. Two practical questions stand out: Can we achieve neural control of extra robotic limbs concurrently with natural movement, and can the system work without the user’s exclusive concentration? If the answer to either of these questions is no, we won’t have a practical technology, but we’ll still have an interesting new tool for research into the neuroscience of motor control. If the answer to both questions is yes, we may be ready to enter a new era of human augmentation. For now, our (biological) fingers are crossed.

  • Why EVs Aren't a Climate Change Panacea
    by Robert N. Charette on 28. January 2023. at 15:44

    “Electric cars will not save the climate. It is completely wrong,” Fatih Birol, Executive Director of the International Energy Agency (IEA), has stated. If Birol were from Maine, he might have simply observed, “You can’t get there from here.” This is not to imply in any way that electric vehicles are worthless. Analysis by the International Council on Clean Transportation (ICCT) argues that EVs are the quickest means to decarbonize motorized transport. However, EVs by themselves are not going to achieve the goal of net zero by 2050. There are two major reasons for this: first, EVs are not going to reach the numbers required by 2050 to make their needed contribution to net zero goals; and second, even if they did, a host of other personal, social, and economic activities must be modified to reach the total net zero mark. For instance, research by Alexandre Milovanoff and his colleagues at the University of Toronto (described in depth in a recent Spectrum article) demonstrates that the U.S. must have 90 percent of its vehicles, or some 350 million EVs, on the road by 2050 in order to hit its emission targets. The likelihood of this occurring is infinitesimal. Some estimates indicate that about 40 percent of vehicles on U.S. roads will still be ICE vehicles in 2050, while other estimates put the figure at less than half that. For the U.S. to hit the 90 percent EV target, sales of all new ICE vehicles across the U.S. must cease by 2038 at the latest, according to research company BloombergNEF (BNEF). Greenpeace, on the other hand, argues that sales of all diesel and petrol vehicles, including hybrids, must end by 2030 to meet such a target. However, achieving either goal would likely require governments to offer hundreds of billions, if not trillions, of dollars in EV subsidies to ICE owners over the next decade, not to mention making significant investments in EV charging infrastructure and the electrical grid.
    ICE vehicle households would also have to be convinced that they would not be giving up activities by becoming EV-only households. As a reality check, current estimates for the number of ICE vehicles still on the road worldwide in 2050 range from a low of 1.25 billion to more than 2 billion. Even assuming that the required EV targets were met in the U.S. and elsewhere, it still will not be sufficient to meet net zero 2050 emission targets. Transportation accounts for only 27 percent of greenhouse gas (GHG) emissions in the U.S.; the sources of the other 73 percent of GHG emissions must be reduced as well. Even in the transportation sector, more than 15 percent of the GHG emissions are created by air and rail travel and shipping. These will also have to be decarbonized. Moreover, for EVs themselves to become true zero-emission vehicles, everything in their supply chain, from mining to electricity production, must be nearly net zero emission as well. Today, depending on the EV model and where it charges, and assuming it is a battery electric and not a hybrid vehicle, it may need to be driven anywhere from 8,400 to 13,500 miles (or, by some more contested estimates, significantly more) to generate fewer GHG emissions than an ICE vehicle. This is because manufacturing an EV creates 30 to 40 percent more emissions than manufacturing an ICE vehicle, mainly because of battery production. In states (or countries) with a high proportion of coal-generated electricity, the miles needed to break even climb further. In Poland and China, for example, an EV would need to be driven 78,700 miles to break even. Just accounting for miles driven, however, BEV cars and trucks appear cleaner than ICE equivalents nearly everywhere in the U.S. today. As electricity increasingly comes from renewables, total EV GHG emissions will continue downward, but that will take at least a decade or more to happen everywhere across the U.S. (assuming policy roadblocks disappear), and even longer elsewhere.
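The break-even arithmetic behind those mileage figures is simple to sketch. The numbers below are illustrative assumptions chosen to fall in the ranges reported above (roughly 4 tonnes of extra manufacturing CO2, mostly from the battery, and per-mile emissions that vary with the electricity mix); they are not figures from any specific study.

```python
def breakeven_miles(extra_mfg_kg_co2, ev_g_per_mile, ice_g_per_mile):
    """Miles an EV must be driven before its lower per-mile emissions
    offset the extra CO2 emitted while manufacturing it."""
    saving_kg_per_mile = (ice_g_per_mile - ev_g_per_mile) / 1000.0
    return extra_mfg_kg_co2 / saving_kg_per_mile

# Relatively clean grid: ICE ~400 g/mile vs. EV charging at ~100 g/mile
clean_grid = breakeven_miles(4000, 100, 400)   # roughly 13,000 miles
# Coal-heavy grid: EV charging emits ~350 g/mile
coal_grid = breakeven_miles(4000, 350, 400)    # roughly 80,000 miles
```

The sensitivity to the grid mix is the whole story: shrinking the per-mile saving from 300 grams to 50 grams multiplies the break-even distance by six, which is why coal-heavy grids such as Poland’s push the figure toward 80,000 miles.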
    If EVs aren’t enough, what else is needed?

    Given that EVs, let alone the rest of the transportation sector, likely won’t hit net zero 2050 targets, what additional actions are being advanced to reduce GHG emissions? A high priority, says IEA’s Birol, is investment in across-the-board energy-related technology research and development and its placement into practice. According to Birol, “IEA analysis shows that about half the reductions to get to net zero emissions in 2050 will need to come from technologies that are not yet ready for market.” Many of these new technologies will be aimed at improving the efficient use of fossil fuels, which will not be disappearing anytime soon. The IEA expects that energy-efficiency improvements, such as the increased use of variable-speed electric motors, will lead to a 40 percent reduction in energy-related GHG emissions over the next twenty years. But even if these hoped-for technological improvements arrive, and most certainly if they do not, the public and businesses are expected to make more energy-conscious decisions to close what the United Nations says is the expected 2050 “emissions gap.” Environmental groups foresee the public needing to use electrified mass transit, reduce long-haul flights for business as well as pleasure, increase telework, walk and cycle to work or stores, change their diet to eat more vegetables, or, if absolutely needed, drive only small EVs. Another expectation is that homeowners and businesses will become “fully electrified” by replacing oil, propane, and gas furnaces with heat pumps, swapping out gas-fired stoves, and installing solar power and battery systems.
    Photo: Dronning Louise’s Bro (Queen Louise’s Bridge) connects inner Copenhagen and Nørrebro and is frequented by many cyclists and pedestrians every day. Frédéric Soltan/Corbis/Getty Images

    Underpinning the behavioral changes being urged (or encouraged by legislation) is the notion of rejecting the current car-centric culture and completely rethinking what personal mobility means. For example, researchers at the University of Oxford in the U.K. argue that, “Focusing solely on electric vehicles is slowing down the race to zero emissions.” Their study found “emissions from cycling can be more than 30 times lower for each trip than driving a fossil fuel car, and about ten times lower than driving an electric one.” If just one out of five urban residents in Europe permanently changed from driving to cycling, emissions from automobiles would be cut by 8 percent, the study reports. Even then, the Oxford researchers concede, breaking the car’s mental grip on people is not going to be easy, given the generally poor state of public transportation across much of the globe.

    Behavioral change is hard

    How willing are people to break their car dependency and change other energy-related behaviors to address climate change? The answer is perhaps some, but maybe not too much. A Pew Research Center survey taken in late 2021 of seventeen countries with advanced economies indicated that 80 percent of those surveyed were willing to alter how they live and work to combat climate change. However, a Kantar Public survey of ten of the same countries taken at about the same time gives a less positive view, with only 51 percent of those polled stating they would alter their lifestyles. In fact, some 74 percent of those polled indicated they were already “proud of what [they are] currently doing” to combat climate change. What both polls failed to explore is which behaviors, specifically, respondents would be willing to permanently change or give up to combat climate change.
    For instance, how many urban dwellers, if told that they must forever give up their cars and instead walk, cycle, or take public transportation, would willingly agree to do so? And how many of those who agreed would also consent to go vegetarian, telework, and forsake trips abroad for vacation? It is one thing to answer a poll indicating a willingness to change, and quite another to “walk the talk,” especially if there are personal, social, or economic inconveniences or costs involved. For instance, recent U.S. survey information shows that while 22 percent of new car buyers expressed interest in a battery electric vehicle (BEV), only 5 percent actually bought one. Granted, there are several cities where living without a vehicle is doable, like Utrecht in the Netherlands, where in 2019, 48 percent of resident trips were made by cycling, or London, where nearly two-thirds of all trips taken that same year were made by walking, cycling, or public transportation. Even in a few U.S. cities it might be possible to live without a car.

    Photo: The world’s largest bike parking facility, Stationsplein Bicycle Parking near Utrecht Central Station in Utrecht, Netherlands, has 12,500 parking places. Abdullah Asiran/Anadolu Agency/Getty Images

    However, in countless other urban areas, especially across most of the U.S., even those wishing to forsake owning a car would find it very difficult to do so without a massive influx of investment into all forms of public transport and personal mobility to eliminate the scores of U.S. transit deserts. As Tony Dutzik of the environmental advocacy group Frontier Group has written, in the U.S. “the price of admission to jobs, education and recreation is owning a car.” That’s especially true if you are a poor urbanite. Owning a reliable automobile has long been one of the only successful means of getting out of poverty. Massive investment in new public transportation in the U.S. is unlikely, given its unpopularity with politicians and the public alike.
    This unpopularity has translated into aging and poorly maintained bus, train, and transit systems that few look forward to using. The American Society of Civil Engineers gives the current state of American public transportation a grade of D- and says today’s $176 billion investment backlog is expected to grow to $250 billion through 2029. While the $89 billion targeted to public transportation in the recently passed Infrastructure Investment and Jobs Act will help, the act also contains more than $351 billion for highways over the next five years. Hundreds of billions in annual investment are needed not only to fix the current public transport system but to build new ones that significantly reduce car dependency in America. Even then, doing so would take decades to complete. Yet even if such an investment were made in public transportation, unless its service is competitive with an EV or ICE vehicle in terms of cost, reliability, and convenience, it will not be used. With EVs costing less to operate than ICE vehicles, the competitive hurdle will increase, despite moves to offer free transit rides. Then there is the social stigma attached to riding public transportation that needs to be overcome as well. A few experts proclaim that ride-sharing using autonomous vehicles will separate people from their cars. Some even claim such AV sharing signals both the end of individual car ownership and the end of the need to invest in public transportation. Both outcomes are far from likely. Other suggestions include redesigning cities to be more compact and more electrified, which would eliminate most of the need for personal vehicles to meet basic transportation needs. Again, this would take decades and untold billions of dollars to accomplish at the scale needed.
    The San Diego, California region has decided to spend $160 billion to meet California’s net zero objectives by creating “a collection of walkable villages serviced by bustling (fee-free) train stations and on-demand shuttles” by 2050. However, there has been public pushback over how to pay for the plan and over its push to decrease personal driving by imposing a mileage tax. According to University of Michigan public policy expert John Leslie King, the challenge of getting to net zero by 2050 is that each decarbonization proposal being made is only part of the overall solution. He notes, “You must achieve all the goals, or you don’t win. The cost of doing each is daunting, and the total cost goes up as you concatenate them.” Concatenated costs also include changing multiple personal behaviors. It is unlikely that automakers, having committed more than a trillion dollars so far to EVs and charging infrastructure, are going to support depriving the public of the activities they enjoy today as a price paid to shift to EVs. A war on EVs will be hard fought.

    Should Policies Nudge or Shove?

    The cost concatenation problem arises not only at a national level, but at countless local levels as well. Massachusetts’ new governor Maura Healey, for example, has set ambitious goals of having at least 1 million EVs on the road, converting 1 million fossil-fuel-burning furnaces in homes and buildings to heat-pump systems, and the state achieving a 100 percent clean electricity supply by 2030. The number of Massachusetts households that can afford or are willing to buy an EV and/or convert their homes to a heat-pump system in the next eight years, even with a current state median household income of $89,000 and subsidies, is likely significantly smaller than the targets set. So, what happens if by 2030 the numbers are well below target, not only in Massachusetts, but in other states like California, New York, or Illinois that also have aggressive GHG emission reduction targets?
    Will governments move from encouraging behavioral changes to combat climate change to, in frustration or desperation, mandating them? And if they do, will there be a tipping point that spurs massive social resistance? For example, dairy farmers in the Netherlands have been protesting plans by the government to force them to cut their nitrogen emissions. This will require dairy farms to reduce their livestock, which will make it difficult or impossible for many to stay in business. The Dutch government estimates 11,200 farms must close, and another 17,600 must reduce their livestock numbers. The government says farmers who do not comply will have their farms taken away by forced buyouts starting in 2023. California admits that getting to a zero-carbon transportation system by 2045 means car owners must reduce their driving to 25 percent below 1990 levels by 2030 and even more by 2045. If drivers fail to do so, will California impose weekly or monthly driving quotas, or punitive per-mile driving taxes, along with mandating mileage data from vehicles ever more connected to the Internet? The San Diego backlash over a mileage tax may be just the beginning. “EVs,” notes King, “pull an invisible trailer filled with required major lifestyle changes that the public is not yet aware of.” When the public does become aware, do not expect it to acquiesce quietly. In the final article of the series, we explore potential unanticipated consequences of transitioning to EVs at scale.

  • Here’s How Apptronik Is Making Its Humanoid Robot
    by Evan Ackerman on 28. January 2023. at 14:00

    Apptronik, a Texas-based robotics company with its roots in the Human Centered Robotics Lab at the University of Texas at Austin, has spent the last few years working toward a practical, general-purpose humanoid robot. By designing its robot (called Apollo) completely from the ground up, including electronics and actuators, Apptronik is hoping that it’ll be able to deliver something affordable, reliable, and broadly useful. But at the moment, the most successful robots are not generalized systems—they’re uni-taskers, robots that can do one specific task very well but more or less nothing else. A general-purpose robot, especially one in a human form factor, would have enormous potential. But the challenge is enormous, too. So why does Apptronik believe that it has the answer to general-purpose humanoid robots with Apollo? To find out, we spoke with Apptronik’s founders, CEO Jeff Cardenas and CTO Nick Paine.

    IEEE Spectrum: Why are you developing a general-purpose robot when the most successful robots in the supply chain focus on specific tasks?

    Nick Paine: It’s about our level of ambition. A specialized tool is always going to beat a general tool at one task, but if you’re trying to solve 10 tasks, or 100 tasks, or 1,000 tasks, it’s more logical to put your effort into a single versatile hardware platform with specialized software that solves a myriad of different problems.

    How do you know that you’ve reached an inflection point where building a general-purpose commercial humanoid is now realistic, when it wasn’t before?

    Paine: There are a number of different things. For one, Moore’s Law has slowed down, but computers are evolving in a way that has helped advance the complexity of algorithms that can be deployed on mobile systems. Also, there are new algorithms that have been developed recently that have enabled advancements in legged locomotion, machine vision, and manipulation. And along with algorithmic improvements, there have been sensing improvements.
All of this has influenced the ability to design these types of legged systems for unstructured environments. Jeff Cardenas: I think it’s taken decades for it to be the right time. After many, many iterations as a company, we’ve gotten to the point where we’ve said, “Okay, we see all the pieces to where we believe we can build a robust, capable, affordable system that can really go out and do work.” It’s still the beginning, but we’re now at an inflection point where there’s demand from the market, and we can get these out into the world. The reason that I got into robotics is that I was sick of seeing robots just dancing all the time. I really wanted to make robots that could be useful in the world. —Nick Paine, CTO of Apptronik Why did you need to develop and test 30 different actuators for Apollo, and how did you know that the 30th actuator was the right one? Paine: The reason for the variety was that we take a first-principles approach to designing robotic systems. The way you control the system really impacts how you design the system, and that goes all the way down to the actuators. A certain type of actuator is not always the silver bullet: Every actuator has its strengths and weaknesses, and we’ve explored that space to understand the limitations of physics to guide us toward the right solutions. With your focus on making a system that’s affordable, how much are you relying on software to help you minimize hardware costs? Paine: Some groups have tried masking the deficiencies of cheap, low-quality hardware with software. That’s not at all the approach we’re taking. We are leaning on our experience building these kinds of systems over the years from a first-principles approach. Building from the core requirements for this type of system, we’ve found a solution that hits our performance targets while also being far more mass producible compared to anything we’ve seen in this space previously. We’re really excited about the solution that we’ve found. 
How much effort are you putting into software at this stage? How will you teach Apollo to do useful things? Paine: There are some basic applications that we need to solve for Apollo to be fundamentally useful. It needs to be able to walk around, to use its upper body and its arms to interact with the environment. Those are the core capabilities that we’re working on, and once those are at a certain level of maturity, that’s where we can open up the platform for third-party application developers to build on top of that. Cardenas: If you look at Willow Garage with the PR2, they had a similar approach, which was to build a solid hardware platform, create a powerful API, and then let others build applications on it. But then you’re really putting your destiny in the hands of other developers. One of the things that we learned from that is if you want to enable that future, you have to prove that initial utility. So what we’re doing is handling the full-stack development on the initial applications, which will be targeting supply chain and logistics. NASA officials have expressed their interest in Apptronik developing “technology and talent that will sustain us through the Artemis program and looking forward to Mars.” “In robotics, seeing is believing. You can say whatever you want, but you really have to prove what you can do, and that’s been our focus. We want to show versus tell.” —Jeff Cardenas, CEO of Apptronik Apptronik plans for the alpha version of Apollo to be ready in March, in time for a sneak peek for a small audience at SXSW. From there, the alpha Apollos will go through pilots as Apptronik collects feedback to develop a beta version that will begin larger deployments. The company expects these programs to lead to a full gamma version and full production runs by the end of 2024.

  • Curing the AI Way
    by Willie Jones on 27. January 2023. at 21:00

The Big Picture features technology through the lens of photographers. Every month, IEEE Spectrum selects the most stunning technology images recently captured by photographers around the world. We choose images that reflect an important advance, or a trend, or that are just mesmerizing to look at. We feature all images on our site, and one also appears in our monthly print edition. Enjoy the latest images, and if you have suggestions, leave a comment below. The Wurst Use of AI From the time the ancient Sumerians started making sausage around 4,000 years ago, the process has been the province of artisans dedicated to the craft of preserving meat so it remained safe to eat for as long as possible. Yet even traditional methods can stand to be improved on from time to time. Katharina Koch of the Landfleischerei Koch in Calden, Germany [right], has retained ancient customs such as the clay chambers in which Ahle sausages ripen while also fine-tuning the conditions under which the meats are cured (such as temperature and moisture level) via AI algorithms. The digital modifications she and scientists at the nearby University of Kassel have developed replicate the production methods that have been passed down for generations. So, instead of spending nearly a year manually monitoring the meats’ maturation process, a sausage maker using the new AI methods will be able to set it and forget it. Uwe Zucchi/picture alliance/Getty Images Electronic Pill Fueled by What You Eat People with diabetes will usually prick their fingers multiple times a day in order to get readings on the amount of glucose (the type of sugar the body uses for fuel) that is in their bloodstream. But researchers at the University of California, San Diego, have developed a bloodless method for tracking blood sugar and other chemical metabolites in the gastrointestinal tract that can be used to infer the person’s relative state of health. 
Their solution to the finger-pricking problem: an electronic pill capable of sensing metabolite levels and transmitting data wirelessly every 5 seconds over a span of several hours. So, instead of snapshots of how the body is reacting to stimuli like food, clinicians will get a steady stream of data. The major innovation boasted by the UCSD team is that their pill draws power from a fuel cell that runs on the glucose in the gut, instead of relying on a battery laden with potentially harmful chemicals. David Baillot/UC San Diego Stretchy Circuits, Wired With Sound The phrase musical arrangement has long referred to the work of art that results from a composition being adapted for different instruments or voices. But going forward, sound will get in on the act of arranging. Engineers at the Korea Advanced Institute of Science and Technology report that they used sound waves to disperse metallic droplets embedded in a polymer in order to make flexible circuits. This “musical arrangement” yields an archipelago of droplets spaced so that electrical conductivity is maintained even when the polymer is bent or twisted. Korea Advanced Institute of Science and Technology A Well-Balanced Machine The relative proportions of a bee’s body and its wings say that, at least in theory, it shouldn’t be able to fly. But where would we be if bees were incapable of flitting from flower to flower, collecting nectar and spreading pollen? Roboticists at ETH Zurich, taking a page from nature, say they too have created a machine whose movement seems to defy the laws of physics. The 1.TK-meter-long gadget, called Cubli, balances on a single point, with a single internal reaction wheel whose spin keeps the unit upright. Ordinarily, a device like this would need one wheel to manage pitch and another to handle roll. But the Zurich team worked out the Cubli’s dimensions so that the one wheel is capable of counterbalancing any forces that would topple the machine. ETH Zurich
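The reaction-wheel balancing idea can be illustrated with a toy simulation: a wheel applies torque to an inverted-pendulum body, and a simple PD law on the tilt keeps it upright. This is a deliberately simplified sketch with invented parameters, not the Cubli's actual dynamics or controller:

```python
import math

# All numbers below are invented for illustration; this is not Cubli's
# real geometry, inertia, or control law.
m, g, l = 1.0, 9.81, 0.1   # body mass [kg], gravity [m/s^2], COM height [m]
J = 0.02                   # body inertia about the pivot point [kg*m^2]

# PD gains on body tilt; the linearized system is stable when
# k1 > m*g*l (enough torque to overcome gravity) and k2 > 0 (damping)
k1, k2 = 2.0, 0.3

theta, omega = 0.3, 0.0    # initial tilt [rad] and tilt rate [rad/s]
dt = 0.001
for _ in range(10000):     # 10 s of simulation, explicit Euler
    tau = k1 * theta + k2 * omega                    # wheel torque command
    alpha = (m * g * l * math.sin(theta) - tau) / J  # reaction torque rights the body
    omega += alpha * dt
    theta += omega * dt

# After 10 s the body has settled upright (theta near 0)
```

With these gains the closed loop is overdamped, so the tilt decays to zero without oscillation; the real Cubli additionally has to manage wheel-speed saturation and three-dimensional dynamics.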

  • Video Friday: Such a Showoff
    by Evan Ackerman on 27. January 2023. at 17:35

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL RSS 2023: 10–14 July 2023, DAEGU, KOREA ICRA 2023: 29 May–2 June 2023, LONDON Robotics Summit & Expo: 10–11 May 2023, BOSTON Enjoy today’s videos! Sometimes, watching a robot almost but not quite fail is way cooler than watching it succeed. [ Boston Dynamics ] Simulation-based reinforcement learning approaches are leading the next innovations in legged robot control. However, the resulting control policies are still not applicable on soft and deformable terrains, especially at high speed. To this end, we introduce a versatile and computationally efficient granular media model for reinforcement learning. We applied our techniques to the Raibo robot, a dynamic quadrupedal robot developed in-house. The trained networks demonstrated high-speed locomotion capabilities on deformable terrains. [ Kaist ] A lonely badminton player’s best friend. [ YouTube ] Come along for the (autonomous) ride with Yorai Shaoul, and see what a day is like for a Ph.D. student at Carnegie Mellon University Robotics Institute. [ AirLab ] In this video we showcase a Husky-based robot that’s preparing for its journey across the continent to live with a family of alpacas on Formant’s farm in Denver, Colorado. [ Clearpath ] Arm prostheses are becoming smarter, more customized, and more versatile. We’re closer to replicating everyday movements than ever before, but we’re not there yet. Can you do better? Join teams to revolutionize prosthetics and build a world without barriers. 
[ Cybathlon 2024 ] RB-VOGUI is the robot developed for this success story and is mainly responsible for the navigation and collection of high-quality data, which is transferred in real time to the relevant personnel. After the implementation of the fleet of autonomous mobile robots, only one operator is needed to monitor the fleet from a control centre. [ Robotnik ] Bagging groceries isn’t only a physical task: knowing how to order the items to prevent damage requires human-like intelligence. Also, bin packing. [ Sanctuary AI ] Seems like lidar is everywhere nowadays, but it started at NASA back in the 1980s. [ NASA ] This GRASP on Robotics talk is by Frank Dellaert at Georgia Tech: “Factor Graphs for Perception and Action.” Factor graphs have been very successful in providing a lingua franca in which to phrase robotics perception and navigation problems. In this talk I will revisit some of those successes, also discussed in depth in a recent review article. However, I will focus on our more recent work in the talk, centered on using factor graphs for action. I will discuss our efforts in motion planning, trajectory optimization, optimal control, and model-predictive control, highlighting SCATE, our recent work on collision avoidance for autonomous spacecraft. [ UPenn ]
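The "lingua franca" idea behind factor graphs is easiest to see in a toy example: a one-dimensional pose graph in which a prior, two odometry measurements, and a slightly inconsistent loop closure are all just residual terms in one least-squares problem. A minimal pure-Python sketch, with all values invented for illustration (real systems use dedicated solvers such as GTSAM or Ceres):

```python
# Toy 1D pose graph treated as a factor graph: each measurement becomes
# a residual, and the robot poses x0, x1, x2 minimize the squared sum.
def residuals(x0, x1, x2):
    return [
        x0 - 0.0,           # prior factor: x0 starts at the origin
        (x1 - x0) - 1.0,    # odometry factor: moved 1.0 m
        (x2 - x1) - 1.0,    # odometry factor: moved 1.0 m
        (x2 - x0) - 2.2,    # loop-closure factor: slightly inconsistent
    ]

x0 = x1 = x2 = 0.0
step = 0.1
for _ in range(2000):  # gradient descent on the sum of squared residuals
    r = residuals(x0, x1, x2)
    x0 -= step * 2 * (r[0] - r[1] - r[3])
    x1 -= step * 2 * (r[1] - r[2])
    x2 -= step * 2 * (r[2] + r[3])

# The solver spreads the 0.2 m inconsistency across all the factors,
# landing near x0 = 0, x1 = 1.067, x2 = 2.133.
```

The same structure scales to full SLAM and, as Dellaert's talk argues, to planning and control: swap in different residual (factor) definitions and let the optimizer reconcile them.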

  • The Costly Impact of Non-Strategic Patents
    by UnitedLex on 27. January 2023. at 16:30

The five largest auto manufacturers will face massive U.S. patent fees within the next five years. This report examines auto industry lapse trends and how a company’s decisions on keeping, selling, or pruning patents can greatly impact its cost savings and revenue generation opportunities. Patent lapse strategies can help companies in any industry out-maneuver the competition. Volume 2 of the U.S. Patent Lapse Series highlights how such decisions, especially during uncertain economic times, can impact the bottom line exponentially within a few years. Download the report to find out: What is the patent lapse strategy leading Honda to save millions each year? How has Toyota’s patent lapse rate changed in the last 10 years? How do the patent lapse rate and associated costs vary between top OEMs? Where are the opportunities for portfolio optimization during uncertain economic times? Get the insights.

  • Forecasting the Ice Loss of Greenland’s Glaciers With Viscoelastic Modeling
    by Alan Petrillo on 27. January 2023. at 13:00

    This sponsored article is brought to you by COMSOL. To someone standing near a glacier, it may seem as stable and permanent as anything on Earth can be. However, Earth’s great ice sheets are always moving and evolving. In recent decades, this ceaseless motion has accelerated. In fact, ice in polar regions is proving to be not just mobile, but alarmingly mortal. Rising air and sea temperatures are speeding up the discharge of glacial ice into the ocean, which contributes to global sea level rise. This ominous progression is happening even faster than anticipated. Existing models of glacier dynamics and ice discharge underestimate the actual rate of ice loss in recent decades. This makes the work of Angelika Humbert, a physicist studying Greenland’s Nioghalvfjerdsbræ outlet glacier, especially important — and urgent. As the leader of the Modeling Group in the Section of Glaciology at the Alfred Wegener Institute (AWI) Helmholtz Centre for Polar and Marine Research in Bremerhaven, Germany, Humbert works to extract broader lessons from Nioghalvfjerdsbræ’s ongoing decline. Her research combines data from field observations with viscoelastic modeling of ice sheet behavior. Through improved modeling of elastic effects on glacial flow, Humbert and her team seek to better predict ice loss and the resulting impact on global sea levels. She is acutely aware that time is short. “Nioghalvfjerdsbræ is one of the last three ‘floating tongue’ glaciers in Greenland,” explains Humbert. “Almost all of the other floating tongue formations have already disintegrated.” One Glacier That Holds 1.1 Meter of Potential Global Sea Level Rise The North Atlantic island of Greenland is covered with the world’s second largest ice pack after that of Antarctica. (Fig. 1) Greenland’s sparsely populated landscape may seem unspoiled, but climate change is actually tearing away at its icy mantle. 
The ongoing discharge of ice into the ocean is a “fundamental process in the ice sheet mass-balance,” according to a 2021 article in Communications Earth & Environment by Humbert and her colleagues. (Ref. 1) The article notes that the entire Northeast Greenland Ice Stream contains enough ice to raise global sea levels by 1.1 meters. While the entire formation is not expected to vanish, Greenland’s overall ice cover has declined dramatically since 1990. This process of decay has not been linear or uniform across the island. Nioghalvfjerdsbræ, for example, is now Greenland’s largest outlet glacier. The nearby Petermann Glacier used to be larger, but has been shrinking even more quickly. (Ref. 2) Existing Models Underestimate the Rate of Ice Loss Greenland’s overall loss of ice mass is distinct from “calving”, which is the breaking off of icebergs from glaciers’ floating tongues. While calving does not directly raise sea levels, the calving process can quicken the movement of land-based ice toward the coast. Satellite imagery from the European Space Agency (Fig. 2) has captured a rapid and dramatic calving event in action. Between June 29 and July 24 of 2020, a 125 km2 floating portion of Nioghalvfjerdsbræ calved into many separate icebergs, which then drifted off to melt into the North Atlantic. Direct observations of ice sheet behavior are valuable, but insufficient for predicting the trajectory of Greenland’s ice loss. Glaciologists have been building and refining ice sheet models for decades, yet, as Humbert says, “There is still a lot of uncertainty around this approach.” Starting in 2014, the team at AWI joined 14 other research groups to compare and refine their forecasts of potential ice loss through 2100. The project also compared projections for past years to ice losses that actually occurred. Ominously, the experts’ predictions were “far below the actually observed losses” since 2015, as stated by Martin Rückamp of AWI. (Ref. 
3) He says, “The models for Greenland underestimate the current changes in the ice sheet due to climate change.” Viscoelastic Modeling to Capture Fast-Acting Forces Angelika Humbert has personally made numerous trips to Greenland and Antarctica to gather data and research samples, but she recognizes the limitations of the direct approach to glaciology. “Field operations are very costly and time consuming, and there is only so much we can see,” she says. “What we want to learn is hidden inside a system, and much of that system is buried beneath many tons of ice! We need modeling to tell us what behaviors are driving ice loss, and also to show us where to look for those behaviors.” Since the 1980s, researchers have relied on numerical models to describe and predict how ice sheets evolve. “They found that you could capture the effects of temperature changes with models built around a viscous power law function,” Humbert explains. “If you are modeling stable, long-term behavior, and you get your viscous deformation and sliding right, your model can do a decent job. But if you are trying to capture loads that are changing on a short time scale, then you need a different approach.” To better understand the Northeast Greenland Ice Stream glacial system and its discharge of ice into the ocean, researchers at the Alfred Wegener Institute have developed an improved viscoelastic model to capture how tides and subglacial topography contribute to glacial flow. What drives short-term changes in the loads that affect ice sheet behavior? Humbert and the AWI team focus on two sources of these significant but poorly understood forces: oceanic tidal movement under floating ice tongues (such as the one shown in Fig. 2) and the ruggedly uneven landscape of Greenland itself. Both tidal movement and Greenland’s topography help determine how rapidly the island’s ice cover is moving toward the ocean. 
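For orientation, the "viscous power law" Humbert describes is Glen's flow law for ice creep, and a Maxwell material augments it with an elastic term. In simplified one-dimensional form (notation chosen here for illustration, not taken from the article):

```latex
% Glen's flow law: purely viscous creep of ice (n is typically about 3)
\dot{\varepsilon}_{\text{viscous}} = A \, \tau^{n}

% Maxwell material: elastic and viscous strain rates add in series
\dot{\varepsilon} = \underbrace{\frac{\dot{\sigma}}{E}}_{\text{elastic}}
                  + \underbrace{\frac{\sigma}{\eta}}_{\text{viscous}}
```

The elastic term responds instantly to rapidly changing loads such as tides, which is precisely what a purely viscous model misses.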
To investigate the elastic deformation caused by these factors, Humbert and her team built a viscoelastic model of Nioghalvfjerdsbræ in the COMSOL Multiphysics software. The glacier model’s geometry is based on data from radar surveys. The model solved underlying equations for a viscoelastic Maxwell material across a 2D model domain consisting of a vertical cross section along the blue line shown in Fig. 3. The simulated results were then compared to actual field measurements of glacier flow obtained by four GPS stations, one of which is shown in Fig. 3. How Cycling Tides Affect Glacier Movement The tides around Greenland typically raise and lower the coastal water line between 1 and 4 meters per cycle. This action exerts tremendous force on outlet glaciers’ floating tongues, and these forces are transmitted into the land-based parts of the glacier as well. AWI’s viscoelastic model explores how these cyclical changes in stress distribution can affect the glacier’s flow toward the sea. The charts in Figure 4 present the measured tide-induced stresses acting on Nioghalvfjerdsbræ at three locations, superimposed on stresses predicted by viscous and viscoelastic simulations. Chart a shows how displacements decline farther inland, 14 kilometers from the grounding line (GL). Chart b shows that cyclical tidal stresses lessen at GPS-hinge, located in a bending zone near the grounding line between land and sea. Chart c shows activity at the location called GPS-shelf, which is mounted on ice floating in the ocean. Accordingly, it shows the most pronounced waveform of cyclical tidal stresses acting on the ice. “The floating tongue is moving up and down, which produces elastic responses in the land-based portion of the glacier,” says Julia Christmann, a mathematician on the AWI team who plays a key role in constructing their simulation models. “There is also a subglacial hydrological system of liquid water between the inland ice and the ground. 
This basal water system is poorly known, though we can see evidence of its effects.” For example, chart a shows a spike in stresses below a lake sitting atop the glacier. “Lake water flows down through the ice, where it adds to the subglacial water layer and compounds its lubricating effect,” Christmann says. The plotted trend lines highlight the greater accuracy of the team’s new viscoelastic simulations, as compared to purely viscous models. As Christmann explains, “The viscous model does not capture the full extent of changes in stress, and it does not show the correct amplitude. (See chart c in Fig. 4.) In the bending zone, we can see a phase shift in these forces due to elastic response.” Christmann continues, “You can only get an accurate model if you account for viscoelastic ‘spring’ action.” Modeling Elastic Strains from Uneven Landscapes The crevasses in Greenland’s glaciers reveal the unevenness of the underlying landscape. Crevasses also provide further evidence that glacial ice is not a purely viscous material. “You can watch a glacier over time and see that it creeps, as a viscous material would,” says Humbert. However, a purely viscous material would not form persistent cracks the way that ice sheets do. “From the beginning of glaciology, we have had to accept the reality of these crevasses,” she says. The team’s viscoelastic model provides a novel way to explore how the land beneath Nioghalvfjerdsbræ facilitates the emergence of crevasses and affects glacial sliding. “When we did our simulations, we were surprised at the amount of elastic strain created by topography,” Christmann explains. “We saw these effects far inland, where they would have nothing to do with tidal changes.” Figure 6 shows how vertical deformation in the glacier corresponds to the underlying landscape and helps researchers understand how localized elastic vertical motion affects the entire sheet’s horizontal movement. 
Shaded areas indicate velocity in that part of the glacier compared to its basal velocity. Blue zones are moving vertically at a slower rate than the sections that are directly above the ground, indicating that the ice is being compressed. Pink and purple zones are moving faster than ice at the base, showing that ice is being vertically stretched. These simulation results suggest that the AWI team’s improved model could provide more accurate forecasts of glacial movements. “This was a ‘wow’ effect for us,” says Humbert. “Just as the up and down of the tides creates elastic strain that affects glacier flow, now we can capture the elastic part of the up and down over bedrock as well.” Scaling Up as the Clock Runs Down The improved viscoelastic model of Nioghalvfjerdsbræ is only the latest example of Humbert’s decades-long use of numerical simulation tools for glaciological research. “COMSOL is very well suited to our work,” she says. “It is a fantastic tool for trying out new ideas. The software makes it relatively easy to adjust settings and conduct new simulation experiments without having to write custom code.” Humbert’s university students frequently incorporate simulation into their research. Examples include Julia Christmann’s PhD work on the calving of ice shelves, and another degree project that modeled the evolution of the subglacial channels that carry meltwater from the surface to the ice base. The AWI team is proud of their investigative work, but they are fully cognizant of just how much information about the world’s ice cover remains unknown — and that time is short. “We cannot afford Maxwell material simulations of all of Greenland,” Humbert concedes. “We could burn years of computational time and still not cover everything. But perhaps we can parameterize the localized elastic response effects of our model, and then implement it at a larger scale,” she says. This scale defines the challenges faced by 21st-century glaciologists. 
The size of their research subjects is staggering, and so is the global significance of their work. Even as their knowledge is growing, it is imperative that they find more information, more quickly. Angelika Humbert would welcome input from people in other fields who study viscoelastic materials. “If other COMSOL users are dealing with fractures in Maxwell materials, they probably face some of the same difficulties that we have, even if their models have nothing to do with ice!” she says. “Maybe we can have an exchange and tackle these issues together.” Perhaps, in this spirit, we who benefit from the work of glaciologists can help shoulder some of the vast and weighty challenges they bear.

References

1. J. Christmann, V. Helm, S.A. Khan, A. Humbert, et al., “Elastic Deformation Plays a Non-Negligible Role in Greenland’s Outlet Glacier Flow,” Communications Earth & Environment, vol. 2, no. 232, 2021.
2. European Space Agency, “Spalte Breaks Up,” September 2020.
3. Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, “Model comparison: Experts calculate future ice loss and the extent to which Greenland and the Antarctic will contribute to sea-level rise,” September 2020.

  • Apptronik Developing General-Purpose Humanoid Robot
    by Evan Ackerman on 27. January 2023. at 13:00

There’s a handful of robotics companies currently working on what could be called general-purpose humanoid robots. That is, human-size, human-shaped robots with legs for mobility and arms for manipulation that can (or, may one day be able to) perform useful tasks in environments designed primarily for humans. The value proposition is obvious—drop-in replacement of humans for dull, dirty, or dangerous tasks. This sounds a little ominous, but the fact is that people don’t want to be doing the jobs that these robots are intended to do in the short term, and there just aren’t enough people to do these jobs as it is. We tend to look at claims of commercializable general-purpose humanoid robots with some skepticism, because humanoids are really, really hard. They’re still really hard in a research context, which is usually where things have to get easier before anyone starts thinking about commercialization. There are certainly companies out there doing some amazing work toward practical legged systems, but at this point, “practical” is more about not falling over than it is about performance or cost effectiveness. The overall approach toward solving humanoids in this way tends to be to build something complex and expensive that does what you want, with the goal of cost reduction over time to get it to a point where it’s affordable enough to be a practical solution to a real problem. Apptronik, based in Austin, Texas, is the latest company to attempt to figure out how to make a practical general-purpose robot. Its approach is to focus on things like cost and reliability from the start, developing (for example) its own actuators from scratch in a way that it can be sure will be cost effective and supply-chain friendly. Apptronik’s goal is to develop a platform that costs well under US $100,000 and to deliver a million of them by 2030, starting with a prototype demonstration early this year. 
Based on what we’ve seen of commercial humanoid robots recently, this seems like a huge challenge. And in part two of this story (to be posted tomorrow), we will be talking in depth to Apptronik’s cofounders to learn more about how they’re going to make general-purpose humanoids happen. First, though, some company history. Apptronik spun out from the Human Centered Robotics Lab at the University of Texas at Austin in 2016, but the company traces its robotics history back a little farther, to 2015’s DARPA Robotics Challenge. Apptronik’s CTO and cofounder, Nick Paine, was on the NASA-JSC Valkyrie DRC team, and Apptronik’s first contract was to work on next-gen actuation and controls for NASA. Since then, the company has been working on robotics projects for a variety of large companies. In particular, Apptronik developed Astra, a humanoid upper body for dexterous bimanual manipulation that’s currently being tested for supply-chain use. But Apptronik has by no means abandoned its NASA roots. In 2019, NASA had plans for what was essentially going to be a Valkyrie 2, which was to be a ground-up redesign of the Valkyrie platform. As with many of the coolest NASA projects, the potential new humanoid didn’t survive budget prioritization for very long, but even at the time it wasn’t clear to us why NASA wanted to build its own humanoid rather than asking someone else to build one for it considering how much progress we’ve seen with humanoid robots over the last decade. Ultimately, NASA decided to move forward with more of a partnership model, which is where Apptronik fits in—a partnership between Apptronik and NASA will help accelerate commercialization of Apollo. “We recognize that Apptronik is building a production robot that’s designed for terrestrial use,” says NASA’s Shaun Azimi, who leads the Dexterous Robotics Team at NASA’s Johnson Space Center. 
“From NASA’s perspective, what we’re aiming to do with this partnership is to encourage the development of technology and talent that will sustain us through the Artemis program and looking forward to Mars.” “Apollo is the robot that we always wanted to build,” says Jeff Cardenas, Apptronik cofounder and CEO. This new humanoid is the culmination of an astonishing amount of R&D, all the way down to the actuator level. “As a company, we’ve built more than 30 unique electric actuators,” Cardenas explains. “You name it, we’ve tried it. Liquid cooling, cable driven, series elastic, parallel elastic, quasi-direct drive…. And we’ve now honed our approach and are applying it to commercial humanoids.” Apptronik’s emphasis on commercialization gives it a much different perspective on robotics development than you get when focusing on pure research the way that NASA does. To build a commercial product rather than a handful of totally cool but extremely complex bespoke humanoids, you need to consider things like minimizing part count, maximizing maintainability and robustness, and keeping the overall cost manageable. “Our starting point was figuring out what the minimum viable humanoid robot looked like,” explains Apptronik CTO Nick Paine. “Iteration is then necessary to add complexity as needed to solve particular problems.” This robot is called Astra. It’s only an upper body, and it’s Apptronik’s first product, and (not having any legs) it’s designed for manipulation rather than dynamic locomotion. Astra is force controlled, with series-elastic torque-controlled actuators, giving it the compliance necessary to work in dynamic environments (and particularly around humans). “Astra is pretty unique,” says Paine. “What we were trying to do with the system is to approach and achieve human-level capability in terms of manipulation workspace and payload. 
This robot taught us a lot about manipulation and actually doing useful work in the world, so that’s why it’s where we wanted to start.” While Astra is currently out in the world doing pilot projects with clients (mostly in the logistics space), internally Apptronik has moved on to robots with legs. The following video, which Apptronik is sharing publicly for the first time, shows a robot that the company is calling its Quick Development Humanoid, or QDH: QDH builds on Astra by adding legs, along with a few extra degrees of freedom in the upper body to help with mobility and balance while simplifying the upper body for more basic manipulation capability. It uses only three different types of actuators, and everything (from structure to actuators to electronics to software) has been designed and built by Apptronik. “With QDH, we’re approaching minimum viable product from a usefulness standpoint,” says Paine, “and this is really what’s driving our development, both in software and hardware.” “What people have done in humanoid robotics is to basically take the same sort of architectures that have been used in industrial robotics and apply those to building what is in essence a multi-degree-of-freedom industrial robot,” adds Cardenas. “We’re thinking of new ways to build these systems, leveraging mass manufacturing techniques to allow us to develop a high-degree-of-freedom robot that’s as affordable as many industrial robots that are out there today.” Cardenas explains that a major driver for the cost of humanoid robots is the number of different parts, the precision machining of some specific parts, and the resulting time and effort it then takes to put these robots together. As an internal-controls test bed, QDH has helped Apptronik to explore how it can switch to less complex parts and lower the total part count. 
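The series-elastic actuation mentioned above works by putting a spring between the motor and the joint and inferring joint torque from the spring's deflection. A minimal control sketch, with invented parameters (not Apptronik's design):

```python
# Series-elastic actuator (SEA) torque-control sketch. Joint torque is
# inferred from spring deflection (tau = k_s * deflection) and the motor
# is velocity-commanded to close the torque error. All parameters below
# are invented for illustration.
k_s = 100.0        # spring stiffness [N*m/rad] (assumed)
kp = 20.0          # proportional gain on torque error [1/s] (assumed)
tau_desired = 5.0  # commanded joint torque [N*m]

q_motor, q_joint = 0.0, 0.0  # the joint is held fixed in this sketch
dt = 0.001
for _ in range(5000):        # 5 s of simulated control
    tau = k_s * (q_motor - q_joint)                 # torque "sensed" via deflection
    q_motor += kp * (tau_desired - tau) / k_s * dt  # motor winds up the spring

# tau has converged to tau_desired; because the spring deflects smoothly,
# unexpected contact changes the measured torque gradually rather than
# abruptly, which is what makes SEAs compliant around people.
```

The design trade-off is bandwidth: the spring filters impacts and enables clean torque sensing, but it also limits how fast the actuator can track rapidly changing torque commands.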
The plan for Apollo is to not use any high-precision or proprietary components at all, which mitigates many supply-chain issues and will help Apptronik reach its target price point for the robot. Apollo will be a completely new robot, based around the lessons Apptronik has learned from QDH. It’ll be average human size: about 1.75 meters tall, weighing around 75 kilograms, with the ability to lift 25 kilograms. It’s designed to operate untethered, either indoors or outdoors. Broadly, Apptronik is positioning Apollo as a high-performance, easy-to-use, and versatile robot that can do a bunch of different things. It is imagining an “iPhone of robots,” where apps can be created for the robot to perform specific tasks. To extend the iPhone metaphor, Apptronik itself will make sure that Apollo can do all of the basics (such as locomotion and manipulation) so that it has fundamental value, but the company sees versatility as the way to get to large-scale deployments and the cost savings that come with them.

“I see the Apollo robot as a spiritual successor to Valkyrie. It’s not Valkyrie 2—Apollo is its own platform, but we’re working with Apptronik to adapt it as much as we can to space use cases.”—Shaun Azimi, NASA Johnson Space Center

The challenge with this app approach is that there’s a critical mass that’s required to get it to work—after all, the primary motivation to develop an iPhone app is that there are a bajillion iPhones out there already. Apptronik is hoping that there are enough basic manipulation tasks in the supply-chain space that Apollo can leverage to scale to that critical-mass point. “This is a huge opportunity where the tasks that you need a robot to do are pretty straightforward,” Cardenas tells us. “Picking single items, moving things with two hands, and other manipulation tasks where industrial automation only gets you to a certain point. 
These companies have a huge labor challenge—they’re missing labor across every part of their business.” While Apptronik’s goal is for Apollo to be autonomous, in the short to medium term, its approach will be hybrid autonomy, with a human overseeing first a few and eventually a lot of Apollos with the ability to step in and provide direct guidance through teleoperation when necessary. “That’s really where there’s a lot of business opportunity,” says Paine. Cardenas agrees. “I came into this thinking that we’d need to make Rosie the robot before we could have a successful commercial product. But I think the bar is much lower than that. There are fairly simple tasks that we can enter the market with, and then as we mature our controls and software, we can graduate to more complicated tasks.” Apptronik is still keeping details about Apollo’s design under wraps, for now. We were shown renderings of the robot, but Apptronik is understandably hesitant to make those public, since the design of the robot may change. It does have a firm date for unveiling Apollo for the first time: SXSW, which takes place in Austin in March.

  • IEEE Discusses 6 Simple Solutions to Climate Change at COP27
    by Kathy Pretz on 26. January 2023. at 19:00

    Simple, effective solutions that can help lessen the impact of climate change already exist. Some of them still need to be implemented, though, while others need to be improved. That’s according to 2023 IEEE President Saifur Rahman, who was among the speakers from engineering organizations at the COP27 event held in Egypt in November. The IEEE Life Fellow spoke during a session addressing the role of technology in delivering an equitable, sustainable, and low-carbon resilient world. Rahman, a power expert and professor of electrical and computer engineering at Virginia Tech, is the former chair of the IEEE ad hoc committee on climate change. The committee was formed last year to coordinate the organization’s response to the global crisis. About one-third of emissions globally are produced through electricity generation, and Rahman said his mission is to help reduce that amount through engineering solutions. At COP27, he said that even though the first legally binding international treaty on climate change, known as the Paris Agreement, was adopted nearly a decade ago, countries have yet to come to a consensus on how to stop burning fossil fuels, among other issues. Some continue to burn coal, for example, because there are no other economically feasible choices for them. “We as technologists from IEEE say, ‘If you keep to your positions, you’ll never get an agreement,’” he said. “We have come to offer this six-point portfolio of solutions that everybody can live with. We want to be a solution partner so we can have parties at the table to help solve this problem of high carbon emissions globally.” The solutions Rahman outlined were the use of proven methods that reduce electricity usage, making coal plants more efficient, using hydrogen and other storage solutions, promoting more renewables, installing new types of nuclear reactors, and encouraging cross-border power transfers. 
Energy-saving tips

One action is to use less electricity, Rahman said, noting that dimming lights by 20 percent in homes, office buildings, hotels, and schools could save 10 percent of electricity. Most people wouldn’t even notice the difference in brightness, he said. Another is switching to LEDs, which use at least 75 percent less energy than incandescent bulbs. LED bulbs cost about five times more, but they last longer, he said. He called on developed countries to provide financial assistance to developing nations to help them replace all their incandescent bulbs with LEDs. Another energy-saving measure is to raise the temperature of air conditioners by 2 °C. This could save 10 percent of electricity as well, Rahman said. By better controlling lighting, heating, and cooling, 20 percent of energy could be saved without causing anyone to suffer, he said.

Efficient coal-burning plants

Shutting down coal power plants completely is unlikely to happen anytime soon, he predicted, especially since many countries are building new ones that have 40-year life spans. Countries that continue to burn coal should do so in high-efficiency power plants, he said. One type is the ultrasupercritical coal-fired steam power plant. Conventional coal-fired plants, which make water boil to generate steam that activates a turbine, have an efficiency of about 38 percent. Ultrasupercritical plants operate at temperatures and pressures above water’s critical point, where distinct liquid and gas phases no longer exist. This results in higher efficiencies: about 46 percent. Rahman cited the Eemshaven ultrasupercritical plant, in Groningen, Netherlands, which was built in 2014. Another efficient option he pointed out is the combined cycle power plant. In its first stage, natural gas is burned in a turbine to make electricity. The heat from the turbine’s exhaust is used to produce steam to turn a turbine in the second stage. 
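The arithmetic behind this two-stage arrangement is that the efficiencies compound: the steam stage recovers a fraction of the heat the gas stage rejects. A minimal sketch (the 38 percent gas-stage and 35 percent steam-stage figures are illustrative assumptions, not numbers from Rahman’s talk):

```python
def combined_cycle_efficiency(eta_gas, eta_steam):
    """Overall efficiency of a two-stage plant whose steam (bottoming)
    cycle runs on the gas turbine's exhaust heat."""
    # Stage 1 converts eta_gas of the fuel energy; the remaining
    # (1 - eta_gas) leaves as exhaust heat, of which the steam
    # stage converts eta_steam.
    return eta_gas + (1.0 - eta_gas) * eta_steam

eta = combined_cycle_efficiency(0.38, 0.35)
print(f"combined-cycle efficiency: {eta:.1%}")  # ~59.7%
```

With those numbers the combined plant converts roughly 60 percent of the fuel energy, comfortably more than 25 percent better than either stage running alone.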
The resulting two-stage power plant is at least 25 percent more efficient than a single-stage plant. Another method to make coal-fired power plants more environmentally friendly is to capture the exhausted carbon dioxide and store it in the ground, Rahman said. Such carbon-capture systems are being used in some locations, but he acknowledges that the carbon sequestration process is too expensive for some countries.

Integrating and storing grid and off-grid energy

To properly balance electricity supply and demand on the power grid, renewables should be integrated into energy generation, transmission, and distribution systems from the very start, Rahman said. He added that the energy from wind, solar, and hydroelectric plants should be stored in batteries so the electricity generated from them during off-peak hours isn’t wasted but integrated into energy grids. He also said low-cost, low-carbon hydrogen fuel should be considered as part of the renewable energy mix. The fuel can be used to power cars, supply electricity, and heat homes, all with zero carbon emissions. “Hydrogen would help emerging economies meet their climate goals, lower their costs, and make their energy grid more resilient,” he said.

Smaller nuclear power plants

Rahman conceded there’s a stigma that surrounds nuclear power plants because of accidents at Chernobyl, Fukushima, Three Mile Island, and elsewhere. But, he said, without nuclear power, the concept of becoming carbon neutral by 2050 isn’t realistic. “It’s not possible in the next 25 years except with nuclear power,” he said. “We don’t have enough solar energy and wind energy.” Small modular reactors could replace traditional nuclear power plants. SMRs are easier and less expensive to build, and they’re safer than today’s large nuclear plants, Rahman said. 
Though small, SMRs are powerful. They have an output of up to 300 megawatts of electricity, about a quarter the output of today’s typical nuclear plant. The modular reactors are assembled in factories and shipped to their ultimate location, instead of being built onsite. And unlike traditional nuclear facilities, SMRs don’t need to be located near large bodies of water to handle the waste heat discharge. SMRs have not taken off, Rahman says, because of licensing and technical issues.

Electricity transfer across national borders

Rahman emphasized the need for more cross-border power transfers, as few countries have enough electricity to supply all their citizens. Many countries already do so. “The United States buys power from Canada. France sells energy to Italy, Spain, and Switzerland,” Rahman said. “The whole world is one grid. You cannot transition from coal to solar and vice versa unless you transfer power back and forth.”

Free research on climate change

During the conference session, Rahman said an IEEE collection of 7,000 papers related to climate change is accessible from the IEEE Xplore Digital Library. IEEE also launched a website that houses additional resources. None of the solutions IEEE proposed are new or untested, Rahman said, but his goal is to “provide a portfolio of solutions acceptable to and deployable in both the emerging economies and the developed countries—which will allow them to sit at the table together and see how much carbon emission can be saved by creative application of already available technologies so that both parties win at the end of the day.”

  • Play Infinite Versions of AI-Generated Pong on the Go
    by Jose Antonio Garcia Peiro on 25. January 2023. at 16:00

There is currently a lot of interest in AI tools designed to help programmers write software. GitHub’s Copilot and Amazon’s CodeWhisperer apply deep-learning techniques originally developed for generating natural-language text, adapting them to generate source code. The idea is that programmers can use these tools as a kind of auto-complete on steroids, using prompts to produce chunks of code that developers can integrate into their software. Looking at these tools, I wondered: Could we take the next step and take the human programmer out of the loop? Could a working program be written and deployed on demand with just the touch of a button? In my day job, I write embedded software for microcontrollers, so I immediately thought of a self-contained handheld device as a demo platform. A screen and a few controls would allow the user to request and interact with simple AI-generated software. And so was born the idea of infinite Pong. I chose Pong for a number of reasons. The gameplay is simple, famously explained on Atari’s original 1972 Pong arcade cabinet in a triumph of succinctness: “Avoid missing ball for high score.” An up button and a down button are all that’s needed to play. As with many classic Atari games created in the 1970s and 1980s, Pong can be written in relatively few lines of code, and has been implemented as a programming exercise many, many times. This means that the source-code repositories ingested as training data for the AI tools are rich in Pong examples, increasing the likelihood of getting viable results. I used a US $6 Raspberry Pi Pico W as the core of my handheld device—its built-in wireless allows direct connectivity to cloud-based AI tools. To this I mounted a $9 Pico LCD 1.14 display module. Its 240 x 135 color pixels are ample for Pong, and the module integrates two buttons and a two-axis micro joystick. 
My choice of programming language for the Pico was MicroPython, because it is what I normally use and because it is an interpreted language, so code can be run without the need for a PC-based compiler. The AI coding tool I used was the OpenAI Codex, which can be accessed via an API that responds to queries using the Web’s HTTP format; these queries are straightforward to construct and send using the urequests and ujson libraries available for MicroPython. Using the OpenAI Codex API is free during the current beta period, but registration is required and queries are limited to 20 per minute—still more than enough to accommodate even the most fanatical Pong jockey.

Only two hardware modules are needed–a Raspberry Pi Pico W [bottom left] that supplies the compute power and a plug-in board with a screen and simple controls [top left]. Nothing else is needed except a USB cable to supply power. —James Provost

The next step was to create a container program. This program is responsible for detecting when a new version of Pong is requested via a button push; when it is, the program sends a prompt to the OpenAI Codex, receives the results, and launches the game. The container program also sets up a hardware abstraction layer, which handles the physical connection between the Pico and the LCD/control module. The most critical element of the whole project was creating the prompt that is transmitted to the OpenAI Codex every time we want it to spit out a new version of Pong. The prompt is a chunk of plain text with the barest skeleton of source code—a few lines outlining a structure common to many video games, namely a list of libraries we’d like to use, a call to process events (such as keypresses), a call to update the game state based on those events, and a call to display the updated state on the screen. How to use those libraries and fill out the calls is up to the AI. 
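In outline, the container’s request cycle looks something like this. It is a simplified sketch, not the actual project code: the skeleton text, endpoint path, model name, and token limit are stand-ins (the real container and prompt are on Hackaday.io).

```python
# Hypothetical condensed version of the prompt skeleton: library list,
# plain-English gameplay comments, and the three calls left for the AI.
PROMPT_SKELETON = """\
import framebuf  # libraries the generated code should build on

# The game includes the following classes.
# Ball: represents the ball; has a position and a velocity.
# Pong: represents the game itself; has two paddles and a ball,
#       and knows how to check when the game is over.
# The display driver offers: def rectangle(self, x, y, w, h, c)

def process_events():  # read the up/down buttons
def update_state():    # move ball and paddles, detect scoring
def draw():            # render the state to the 240x135 LCD
"""

def request_pong(api_key, prompt=PROMPT_SKELETON):
    """POST the prompt to the Codex completions endpoint and return
    the generated source text (network call; details are assumptions)."""
    import urequests  # MicroPython HTTP client
    resp = urequests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": "Bearer " + api_key,
                 "Content-Type": "application/json"},
        json={"model": "code-davinci-002",
              "prompt": prompt,
              "max_tokens": 1500},
    )
    return resp.json()["choices"][0]["text"]

def run_generated(source):
    """Execute the returned code in a fresh namespace, so each button
    press yields an independent game."""
    game_env = {}
    exec(source, game_env)
    return game_env

# Demo of the launch step with a trivial stand-in for generated code:
print(run_generated("msg = 'pong'")["msg"])  # → pong
```

Running the generated code through `exec` in its own dictionary keeps each AI-written game from leaking state into the container or into the next game.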
The key to turning this generic structure into a Pong game is the embedded comments—optional in source code written by humans, really useful in prompts. The comments describe the gameplay in plain English—for example, “The game includes the following classes…Ball: This class represents the ball. It has a position, a velocity, and a debug attributes [sic]. Pong: This class represents the game itself. It has two paddles and a ball. It knows how to check when the game is over.” (Go to Hackaday.io to play an infinite number of Pong games with the Raspberry Pi Pico W; my container and prompt code are on the site.) What comes back from the AI is about 300 lines of code. In my early attempts the code would fail to display the game because the version of the MicroPython framebuffer library that works with my module is different from the framebuffer libraries the OpenAI Codex was trained on. The solution was to add the descriptions of the methods my library uses as prompt comments, for example: “def rectangle(self, x, y, w, h, c).” Another issue was that many of the training examples used global variables, whereas my initial prompt defined variables as attributes scoped to live inside individual classes, which is generally a better practice. I eventually had to give up, go with the flow, and declare my variables as global.

The variations of Pong created by the OpenAI Codex vary widely in ball and paddle size and color and how scores are displayed. Sometimes the code results in an unplayable game, such as at the bottom right corner, where the player paddles have been placed on top of each other. —James Provost

The code that comes back from my current prompt produces a workable Pong game about 80 percent of the time. Sometimes the game doesn’t work at all, and sometimes it produces something that runs but isn’t quite Pong, such as when it allows the paddles to be moved left and right in addition to up and down. 
Sometimes it’s two human players, and other times you play against the machine. Since it is not specified in the prompt, Codex chooses either option. When you play against the machine, it’s always interesting to see how Codex has implemented that part of the game logic. So who is the author of this code? Certainly there are legal disputes stemming from, for example, how this code should be licensed, as much of the training set is based on open-source software that imposes specific licensing conditions on code derived from it. But licenses and ownership are separate from authorship, and with regard to the latter I believe it belongs to the programmer who uses the AI tool and verifies the results, as would be the case if you created artwork with a painting program made by a company and used their brushes and filters. As for my project, the next step is to look at more complex games. The 1986 arcade hit Arkanoid on demand, anyone?

  • Powering Offshore Wind Farms With Numerical Modeling of Subsea Cables
    by Brianne Christopher on 25. January 2023. at 13:00

This sponsored article is brought to you by COMSOL. “Laws, Whitehouse received five minutes signal. Coil signals too weak to relay. Try drive slow and regular. I have put intermediate pulley. Reply by coils.” Sound familiar? The message above was sent through the first transatlantic telegraph cable between Newfoundland and Ireland, way back in 1858. (“Whitehouse” refers to the chief electrician of the Atlantic Telegraph Company at the time, Wildman Whitehouse.) Fast forward to 2014: The bottom of the ocean is home to nearly 300 communications cables, connecting countries and providing internet communications around the world. Fast forward again: As of 2021, there are an estimated 1.3 million km of submarine cables (Figure 1) in service, ranging from a short 131 km cable between Ireland and the U.K. to the 20,000 km cable that connects Asia with North America and South America. We know what the world of submarine cables looks like today, but what about the future?

Moving Wind Power Offshore

The offshore wind (OFW) industry is one of the most rapidly advancing sources of power around the world. It makes sense: Wind is stronger and more consistent over the open ocean than it is on land. Some wind farms are capable of powering 500,000 homes or more. Currently, Europe leads the market, making up almost 80 percent of OFW capacity. However, the worldwide demand for energy is expected to increase by 20 percent in 10 years, with a large majority of that demand supplied by sustainable energy sources like wind power. Offshore wind farms (Figure 2) are made up of networks of turbines. These networks include cables that connect wind farms to the shore and supply electricity to our power grid infrastructure (Figure 3). Many OFW farms are made up of grounded structures, like monopiles and other types of bottom-fixed wind turbines. 
The foundations for these structures are expensive to construct and difficult to install in deep sea environments, as the cables have to be buried in the seafloor. Installation and maintenance are easier to accomplish in shallow waters. Wind turbines for offshore wind farms are starting to be built further out into the ocean. This creates a new need for well-designed subsea cables that can reach longer distances, survive in deeper waters, and better connect our world with sustainable power. The future of offshore wind lies in wind farms that float on ballasts and moorings, with the cables laid directly on the seafloor. Floating wind farms are a great solution when wind farms situated just off the coast grow crowded. They can also take advantage of the bigger and more powerful winds that occur further out to sea. Floating wind farms are expected to grow more popular over the next decade. This is an especially attractive option for areas like the Pacific Coast of the United States and the Mediterranean, where the shores are deeper, as opposed to the shallow waters of the Atlantic Coast of the U.S., U.K., and Norway. One important requirement of floating OFW farms is the installation of dynamic, high-capacity submarine cables that are able to effectively harness and deliver the generated electricity to our shores.

Design Factors for Resilient Subsea Cables

Ever experienced slower-than-usual internet? Failure of a subsea cable may be to blame. Cable failures of this kind are a common — and expensive — occurrence, whether from mechanical stress and strain caused by bedrock, fishing trawlers, and anchors, or from problems with the cable design itself. As the offshore wind industry continues to grow, our need to develop power cables that can safely and efficiently connect these farms to our power grid grows as well. 
Before fixing or installing a submarine cable, which can cost billions of dollars, cable designers have to ensure that designs will perform as intended in undersea conditions. Today, this is typically done with the help of computational electromagnetics modeling. To validate cable simulation results, international standards are used, but these standards have not been able to keep up with recent advancements in computational power and the simulation software’s growing capabilities. Hellenic Cables and its subsidiary FULGOR use the finite element method (FEM) to analyze their cable designs and compare them to experimental measurements, often getting better results than what the international standards can offer.

Updated Methodology for Calculating Cable Losses

The International Electrotechnical Commission (IEC) provides standards for electrical cables, including Standard 60287 1-1 for calculating cable losses and current ratings. One problem with the formulation used in Standard 60287 is that it overestimates cable losses — especially the losses in the armor of three-core (3C) submarine cables. Cable designers are forced to adopt a new methodology for performing these analyses, and the team at Hellenic Cables recognizes this. “With a more accurate and realistic model, significant optimization margins are expected,” says Dimitrios Chatzipetros, team leader of the Numerical Analysis group at Hellenic Cables. The new methodology will enable engineers to reduce cable cross sections, thereby reducing their costs, which is the paramount goal for cable manufacturing. An electric cable is a complex device to model. The geometric structure consists of three main power cores that are helically twisted with a particular lay length, and hundreds of additional wires — screen or armor wires — that are twisted with a second or third lay length. This makes it difficult to generate the mesh and solve for the electromagnetic fields. 
“This is a tedious 3D problem with challenging material properties, because some of the elements are ferromagnetic,” says Andreas Chrysochos, associate principal engineer in the R&D department of Hellenic Cables. In recent years, FEM has made a giant leap when it comes to cable analysis. The Hellenic Cables team first used FEM to model a full cable section of around 30 to 40 meters in length. This turned out to be a huge numerical challenge that can only realistically be solved on a supercomputer. By switching to periodic models with a periodic length equal to the cable’s cross pitch, the team reduced the problem from 40 meters down to 2–4 meters. Then they introduced short-twisted periodicity, which reduces the periodic length of the model from meters to centimeters, making it much lighter to solve. “The progress was tremendous,” says Chrysochos. (Figure 4) Although the improvements that FEM brings to cable analysis are great, Hellenic Cables still needs to convince its clients that their validated results are more realistic than those provided by the current IEC standard. Clients are often already aware of the fact that IEC 60287 overestimates cable losses, but results visualization and comparison to actual measurements can build confidence in project stakeholders. (Figure 5)

Finite Element Modeling of Cable Systems

Electromagnetic interference (EMI) presents several challenges when it comes to designing cable systems — especially the capacitive and inductive couplings between cable conductors and sheaths. For one, when calculating current ratings, engineers need to account for power losses in the cable sheaths during normal operation. In addition, the overvoltages on cable sheaths need to be within acceptable limits to meet typical health and safety standards. As Chrysochos et al. discuss in “Capacitive and Inductive Coupling in Cable Systems – Comparative Study between Calculation Methods” (Ref. 
3), there are three main approaches when it comes to calculating these capacitive and inductive couplings. The first is the complex impedance method (CIM), which calculates the cable system’s currents and voltages while neglecting its capacitive currents. This method also assumes that the earth return path is represented by an equivalent conductor. Another common method is electromagnetic transients program (EMT) software, which can be used to analyze electromagnetic transients in power systems using both time- and frequency-domain models. The third method, FEM, is the foundation of the COMSOL Multiphysics software. The Hellenic Cables team used COMSOL Multiphysics and the add-on AC/DC Module to compute the electric fields, currents, and potential distribution in conducting media. “The AC/DC Module and solvers behind it are very robust and efficient for these types of problems,” says Chrysochos. The Hellenic Cables team compared the three methods — CIM, EMT software, and FEM (with COMSOL Multiphysics) — when analyzing an underground cable system with an 87/150 kV nominal voltage and 1000 mm2 cross section (Figure 6). They modeled the magnetic field and induced current density distributions in and around the cable system’s conductors, accounting for the bonding type with an external electrical circuit. The results from all three methods show good agreement for three different configurations: solid bonding, single-point bonding, and cross bonding (Figure 7). This demonstrates that FEM can be applied to all types of cable configurations and installations when taking into account both capacitive and inductive coupling. The Hellenic Cables team also used FEM to study thermal effects in subsea cables, such as HVAC submarine cables for offshore wind farms, as described in “Review of the Accuracy of Single Core Equivalent Thermal Model for Offshore Wind Farm Cables” (Ref. 4). 
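For the bonding configurations just mentioned, the quantity designers usually check under single-point bonding is the standing voltage induced at the sheath’s open end. A back-of-the-envelope sketch for trefoil-laid single-core cables uses the classical core-to-sheath mutual reactance (the formula is the textbook IEC 60287-style expression; the current, spacing, and diameter values are illustrative assumptions, not data from the Hellenic Cables study):

```python
import math

def sheath_standing_voltage(current_a, route_km, spacing_m,
                            sheath_diam_m, freq_hz=50.0):
    """Open-end sheath voltage for a single-point-bonded, trefoil-laid
    single-core cable run (classical approximation, illustrative only).

    The core-to-sheath mutual reactance per metre is
    X = 2 * omega * 1e-7 * ln(2s/d), with s the axial spacing between
    phases and d the mean sheath diameter."""
    omega = 2 * math.pi * freq_hz
    x_per_m = 2 * omega * 1e-7 * math.log(2 * spacing_m / sheath_diam_m)
    return current_a * x_per_m * route_km * 1000.0

# e.g. 1000 A through a 1 km section, 150 mm spacing, 80 mm sheath diameter
v = sheath_standing_voltage(1000, 1.0, 0.15, 0.08)
print(f"standing voltage: {v:.0f} V")
```

The voltage grows linearly with both current and section length, which is why long single-point-bonded sections need sheath voltage limiters, and why cross bonding is used to cancel the induced voltages on longer routes.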
The current IEC Standard 60287 1-1 includes a thermal model, and the team used FEM to identify its weak spots and improve its accuracy. First, they validated the current IEC model with finite element analysis. They found that the current standards do not account for the thermal impact of the cable system’s metallic screen materials, which means that the temperature can be underestimated by up to 8 °C. By deriving analytical correction formulas based on several FEM models, the team reduced this discrepancy to 1 °C! Their analysis also highlights significant discrepancies between the standard and the FEM model, especially when the corresponding sheath thickness is small, the sheath thermal conductivity is high, and the power core is large. This issue is particularly important for OFW projects, as the cables involved are expected to grow larger and larger.

Further Research into Cable Designs

In addition to studying inductive and capacitive coupling and thermal effects, the Hellenic Cables team evaluated other aspects of cable system designs, including losses, thermal resistance of surrounding soil, and grounding resistance, using FEM and COMSOL Multiphysics. “In general, COMSOL Multiphysics is much more user friendly and efficient, such as when introducing temperature-dependent losses in the cable, or when presenting semi-infinite soil and infinite element domains. We found several ways to verify what we already know about cables, their thermal performance, and loss calculation,” says Chatzipetros.

Losses

The conductor size of a subsea or terrestrial cable affects the cost of the cable system. This is often a crucial aspect of an offshore wind farm project. To optimize the conductor size, designers need to be able to accurately determine the cable’s losses. To do so, they first turned to temperature. Currents induced in a cable’s magnetic sheaths yield extra losses, which contribute to the temperature rise of the conductor. 
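The way sheath and armor losses feed back into the permissible current can be seen in the IEC 60287-1-1 rating equation. A stripped-down version (dielectric losses neglected; every parameter value below is an illustrative assumption, not data from the study) shows why overestimating the loss factors shrinks the computed rating:

```python
import math

def rated_current(delta_theta, r_ac, t1, t2, t3, t4,
                  n_cores=3, lam_sheath=0.1, lam_armor=0.2):
    """Simplified IEC 60287-1-1 current rating, neglecting dielectric
    losses.

    delta_theta: permissible conductor temperature rise (K)
    r_ac: AC resistance of one conductor at max temperature (ohm/m)
    t1..t4: thermal resistances of insulation, bedding, outer serving,
            and surrounding soil (K.m/W)
    lam_sheath, lam_armor: sheath and armor loss factors (ratios of
            sheath/armor losses to conductor losses)"""
    denom = (r_ac * t1
             + n_cores * r_ac * (1 + lam_sheath) * t2
             + n_cores * r_ac * (1 + lam_sheath + lam_armor) * (t3 + t4))
    return math.sqrt(delta_theta / denom)

# Same cable, standard-style armor loss factor vs. a lower FEM-derived one
i_std = rated_current(55, 2.3e-5, 0.4, 0.1, 0.05, 1.0, lam_armor=0.4)
i_fem = rated_current(55, 2.3e-5, 0.4, 0.1, 0.05, 1.0, lam_armor=0.1)
print(f"rating (standard loss factor): {i_std:.0f} A")
print(f"rating (FEM loss factor):      {i_fem:.0f} A")
```

Because the loss factors sit in the denominator, a pessimistic armor loss factor directly lowers the rating, forcing a larger (and costlier) conductor for the same power transfer.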
When calculating cable losses, the current IEC standard does not consider proximity effects in sheath losses. If cable cores are in close proximity (say, for a wind farm 3C cable), the accuracy of the loss calculation is reduced. Using FEM, the Hellenic Cables team was able to study how conductor proximity effects influence losses generated in sheaths in submarine cables with lead-sheathed cores and a nonmagnetic armor. They then compared the IEC standard with the results from the finite element analysis, which showed better agreement with measured values from an experimental setup (Figure 8). This research was discussed in the paper “Induced Losses in Non-Magnetically Armoured HVAC Windfarm Export Cables” (Ref. 5).

Thermal Resistance of Soil

Different soil types have different thermal insulating characteristics, which can severely limit the amount of heat dissipated from the cable, thereby reducing its current-carrying capacity. This means that larger conductor sizes are needed to transmit the same amount of power in areas with more thermally adverse soil, causing the cable’s cost to increase. In the paper “Rigorous calculation of external thermal resistance in non-uniform soils” (Ref. 6), the Hellenic Cables team used FEM to calculate the effective soil thermal resistance for different cable types and cable installation scenarios (Figure 9). First, they solved for the heat transfer problem under steady-state conditions with arbitrary temperatures at the cable and soil surfaces. They then evaluated the effective thermal resistance based on the heat dissipated by the cable surface into the surrounding soil. Simulations were performed for two types of cables: a typical SL-type submarine cable with 87/150 kV, a 1000 mm2 cross section, and copper conductors, as well as a typical terrestrial cable with 87/150 kV, a 1200 mm2 cross section, and aluminum conductors. The team analyzed three different cable installation scenarios (Figure 10). 
The first scenario is when a cable is installed beneath a horizontal layer, such as when sand waves are expected to gradually add to the seafloor’s initial level after installation. The second is when a cable is installed within a horizontal layer, which occurs when the installation takes place in a region with horizontal directional drilling (HDD). The third scenario is when a cable is installed within a backfilled trench, typical for regions with unfavorable thermal behavior, in order to reduce the impact of the soil on the temperature rise of the cable. The numerical modeling results prove that FEM can be applied to any material or shape of multilayer or backfilled soil, and that the method is compatible with the current rating methodology in IEC Standard 60287.

Grounding Resistance

The evaluation of grounding resistance is important to ensure the integrity and secure operation of cable sheath voltage limiters (SVLs) when subject to earth potential rise (EPR). In order to calculate grounding resistance, engineers need to know the soil resistivity for the problem at hand and have a robust calculation method, like FEM. The Hellenic Cables team used FEM to analyze soil resistivity for two sites: one in northern Germany and one in southern Greece. As described in the paper “Evaluation of Grounding Resistance and Its Effect on Underground Cable Systems” (Ref. 7), they found that the apparent resistivity of the soil is a monotonic function of distance, and that a two-layer soil model is sufficient for their modeling problem (Figure 11). After finding the resistivity, the team calculated the grounding resistance for a single-rod scenario (as a means of validation). After that, they proceeded with a complex grid, which is typical of cable joint pits found in OFW farms. For both scenarios, they found the EPR at the substations and transition joint pit, as well as the maximum voltage between the cable sheath and local earth (Figure 12). 
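The single-rod validation case has a well-known closed form against which any numerical method can be checked. A minimal sketch using the classical driven-rod formula (the resistivity, rod length, and radius below are illustrative, not the measured German or Greek site values):

```python
import math

def rod_grounding_resistance(rho, length, radius):
    """Grounding resistance of a single vertical rod in uniform soil,
    via the classical formula R = rho/(2*pi*L) * (ln(4L/a) - 1).

    rho: soil resistivity (ohm.m); length, radius: rod dimensions (m)"""
    return rho / (2 * math.pi * length) * (math.log(4 * length / radius) - 1)

# A 3 m rod of 16 mm diameter in 100 ohm.m soil
print(f"{rod_grounding_resistance(100.0, 3.0, 0.008):.1f} ohm")
```

Because the resistance scales linearly with soil resistivity, the two-layer resistivity model mentioned above is what dominates the accuracy of any grid-level calculation built on top of it.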
The results demonstrate that FEM is a highly accurate calculation method for grounding resistance, as they show good agreement with both numerical data from measurements and electromagnetic transient software calculations (Figure 13).

A Bright and Windy Future

The Hellenic Cables team plans to continue the important work of further improving all of the cable models they have developed. The team has also performed research into HVDC cables, which involve XLPE insulation and voltage source converter (VSC) technology. HVDC cables can be more cost-efficient for systems installed over long distances. Like the wind used to power offshore wind farms, electrical cable systems are all around us. Even though we cannot always see them, they are working hard to ensure we have access to a high-powered and well-connected world. Optimizing the designs of subsea and terrestrial cables is an important part of building a sustainable future.

References

1. M. Hatlo, E. Olsen, R. Stølan, and J. Karlstrand, “Accurate analytic formula for calculation of losses in three-core submarine cables,” Jicable, 2015.
2. S. Sturm, A. Küchler, J. Paulus, R. Stølan, and F. Berger, “3D-FEM modelling of losses in armoured submarine power cables and comparison with measurements,” CIGRE Session 48, 2020.
3. A.I. Chrysochos et al., “Capacitive and Inductive Coupling in Cable Systems – Comparative Study between Calculation Methods,” 10th International Conference on Insulated Power Cables, Jicable, 2019.
4. D. Chatzipetros and J.A. Pilgrim, “Review of the Accuracy of Single Core Equivalent Thermal Model for Offshore Wind Farm Cables,” IEEE Transactions on Power Delivery, Vol. 33, No. 4, pp. 1913–1921, 2018.
5. D. Chatzipetros and J.A. Pilgrim, “Induced Losses in Non-Magnetically Armoured HVAC Windfarm Export Cables,” IEEE International Conference on High Voltage Engineering and Application (ICHVE), 2018.
6. A.I. Chrysochos et al., “Rigorous calculation of external thermal resistance in non-uniform soils,” CIGRE Session 48, 2020.
7. A.I. Chrysochos et al., “Evaluation of Grounding Resistance and Its Effect on Underground Cable Systems,” Mediterranean Conference on Power Generation, Transmission, Distribution and Energy Conversion, 2020.

  • Meet the Members Running for 2024 IEEE President-Elect
    by Joanna Goodrich on 24. January 2023. at 19:00

The IEEE Board of Directors has nominated Life Fellow Roger Fujii and Senior Member Kathleen Kramer as candidates for IEEE president-elect. The winner of this year’s election will serve as IEEE president in 2025. For more information about the election, president-elect candidates, and petition process, visit the IEEE election website.

Life Fellow Roger Fujii

Nominated by the IEEE Board of Directors

Fujii is president of Fujii Systems of Rancho Palos Verdes, Calif., which designs critical systems. Before starting his company, Fujii was vice president at Northrop Grumman’s engineering division in San Diego. His area of expertise is certifying critical systems. He has been a guest lecturer at California State University, the University of California, and Xiamen University. An active IEEE volunteer, Fujii most recently chaired the IEEE financial transparency reporting committee and the IEEE ad hoc committee on IEEE in 2050. The ad hoc committee envisioned scenarios to gain a global perspective of what the world might look like in 2050 and beyond, and what potential futures might mean for IEEE. He was 2016 president of the IEEE Computer Society, 2021 vice president of the IEEE Technical Activities Board, and 2012–2014 director of Division VIII. Fujii received the 2020 Richard E. Merwin Award, the IEEE Computer Society’s highest-level volunteer service award.

Senior Member Kathleen Kramer

Nominated by the IEEE Board of Directors

Kramer is a professor of electrical engineering at the University of San Diego, where she served as chair of the EE department and director of engineering from 2004 to 2013. As director, she provided academic leadership for engineering programs and developed new programs. Her areas of interest include multisensor data fusion, intelligent systems, and cybersecurity in aerospace systems. She has written or coauthored more than 100 publications.
Kramer has worked for several companies including Bell Communications Research, Hewlett-Packard, and Viasat. She is a distinguished lecturer for the IEEE Aerospace and Electronic Systems Society and has given talks on signal processing, multisensor data fusion, and neural systems. She leads the society’s technical panel on cybersecurity. Kramer earned bachelor’s degrees in electrical engineering and physics in 1986 from Loyola Marymount University, in Los Angeles. She earned master’s and doctoral degrees in EE in 1991 from Caltech.

  • Portable Life-Support Device Provides Critical Care in Conflict and Disaster Zones
    by LEMO on 24. January 2023. at 15:37

This is a sponsored article brought to you by LEMO. A bomb explodes — medical devices spring into action. It is only in war that both sides of human ingenuity coexist so brutally. On one side, it innovates to wound and kill; on the other, it heals and saves lives. Side by side, but viscerally opposed. Dr. Joe Fisher is devoted to the light side of human ingenuity: medicine. His research at Toronto’s University Health Network has made major breakthroughs in understanding the absorption and use of oxygen by the body. Then, based on the results, he developed new, highly efficient methods of delivering oxygen to patients. In 2004, together with other physicians and engineers, he created a company to develop solutions based on his innovations. He named it after the Toronto neighborhood where he still lives — Thornhill Medical. Meanwhile, the studies conducted by Dr. Fisher started drawing attention from the U.S. Marines. They had been looking for solutions to reduce the use of large, heavy, and potentially explosive oxygen tanks transported by their medical teams to military operation sites. “At first, they asked us if we could prove that it was possible to ventilate patients using much less oxygen,” says Veso Tijanic, COO of Thornhill Medical. “We proved it. Then, they asked us whether we could develop a device for this. Finally, whether we could integrate other functionalities into this device.” These back-and-forths lasted about five years, gradually combining science and technology. The result was a first product, launched in 2011: MOVES, an innovative portable life support unit. This cooperation has also deeply transformed Thornhill Medical. “We used to see ourselves as an R&D laboratory; we have now also become a medical device manufacturer!” says Tijanic. Whilst the U.S.
Marines started using MOVES, Thornhill Medical continued to innovate. In 2017, it launched an enhanced version, MOVES SLC. Today, the Canadian company employs a staff of about 70. It continues to do research and development with its own team and partners around the world, publishing regularly in scientific journals. It has sold MOVES SLC around the world and launched two other solutions, MADM and ClearMate. MADM is a portable device (capable of functioning on extreme terrain) that connects to any ventilator to deliver gas anaesthesia. ClearMate is an instrument — also portable and requiring no electricity — that allows users to take quick action in case of carbon monoxide poisoning. This is the most common respiratory poisoning, and every second without treatment worsens the consequences for the brain and other organs.

An innovative ventilator design

Just like these two products, the heart of MOVES SLC is a technology stemming directly from Dr. Fisher’s research in breathing sciences. It includes a ventilator operating in a circle circuit: It recovers the oxygen expired by the patient, carefully controls its concentration (high FiO2), and redistributes only the strict minimum to the patient. MOVES SLC operates with significantly less oxygen than required by traditional open-circuit ventilators — so little that a small oxygen concentrator integrated into MOVES SLC, which extracts oxygen from ambient air, is sufficient. No need for supplies from large oxygen tanks. Yet MOVES SLC is more than an innovative ultra-efficient ventilator, says Tijanic: “It is a complete life support device.” In addition to its integrated oxygen concentrator, it also includes suction and several sensors that monitor vital signs, and it brings it all together via a unique interface that can be operated on the device or by a mobile touch screen.
The user can intubate a patient and monitor their ventilation (FiO2, ETCO2, SpO2, ABP, and other indicators) in addition to the patient’s temperature (two sensors), blood pressure (internal and external), and 12-lead ECG. The evolution of these measurements can be followed over the last 24 hours. All of this in a device measuring only 84 cm x 14 cm x 25 cm and weighing about 21 kilograms (including interchangeable batteries), which can be slung across the shoulder. “MOVES SLC represents no more than 30 percent of the volume and weight of traditional equipment — ventilator, concentrator, suction, monitoring device,” adds the COO. Integrating various technologies in such a lightweight, compact package was, unsurprisingly, a major challenge for the engineers. Still, it was not the most difficult one. Making medical device components capable of withstanding extreme conditions proved even more complex. “Traditional technologies were designed to function in hospitals,” explains Tijanic. “MOVES must function in the middle of military operations, and be resistant to vibrations, crashes and shock, continue operating smoothly in sandstorms or in the rain, in temperatures between -26°C and +54°C.” Sometimes, the engineers could take existing components and develop protective features for them. Occasionally, they would recast components from different markets (oxygen sensors, for instance) to integrate them into their device. And in other cases, they had to start from scratch, creating their own robust components.

Military-grade ruggedness

The challenge was successfully overcome: “MOVES is designed under the highest industry standards and has been tested and fully certified by various regulatory bodies.” It has been certified MIL-STD-810G, a ruggedness U.S.
military standard, verified by over twenty different tests (acoustic vibration, explosive atmosphere, etc.). The device is hence approved for use — not only transported, but actually used on a patient — in various helicopters, aircraft, and land vehicles. And this makes a world of difference for Tijanic. “Critical care, such as we provide, normally requires specially equipped facilities or vehicles. With MOVES SLC, any place or vehicle — even civilian — of sufficient size is an opportunity for treatment.” Thornhill’s fully integrated mobile life support has been used by military medical teams for five years already. The device is currently saving lives in Ukraine, Thornhill Medical having donated a number of them as well as its mobile anesthesia delivery module MADM.

An Introduction to MOVES SLC

In July 2022, the U.S. Army published a report summarizing its medical modernization strategy. The 22-page report confirms the need for ever more lightweight, compact, and cost-effective technology. It also mentions the use of artificial intelligence for more autonomous monitoring of patients’ medical condition. Thornhill is exploring the AI angle. “There isn’t always a qualified expert available everywhere,” explains Tijanic. “AI could ensure the optimum settings of the device, and then modify these depending on how the patient’s condition evolves.” Thornhill is also exploring another solution for cases where no experts are available on the spot. Last April, a MOVES SLC was used in a demonstration of “remote control of ventilators and infusion pumps to support disaster care.” Operators based in Seattle successfully controlled a device in Toronto remotely. Science fiction thus becomes reality. The Canadian company continues innovating to heal and save lives on rough, chaotic terrain and in the most extreme and unpredictable circumstances. It is driven by medical and technological progress.
It is also driven by a many-thousand-year-old trend: Humans will likely never stop waging war.

  • Convincing Consumers To Buy EVs
    by Robert N. Charette on 23. January 2023. at 21:13

With the combination of requiring that all new light-duty vehicles sold in New York State be zero-emission by 2035, investments in electric vehicle charging stations, and state and federal EV rebates, “you’re going to see that you have no more excuses” for not buying an EV, according to New York Governor Kathy Hochul.

The EV Transition Explained

This is the tenth in a series of articles exploring the major technological and social challenges that must be addressed as we move from vehicles with internal-combustion engines to electric vehicles at scale. In reviewing each article, readers should bear in mind Nobel Prize–winning physicist Richard Feynman’s admonition: “For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.”

Perhaps, but getting the vast majority of the 111 million U.S. households that own one or more light-duty internal combustion vehicles to switch to EVs is going to take time. Even if interest in purchasing an EV is increasing, close to 70 percent of Americans are still leaning toward buying an ICE vehicle as their next purchase. In the UK, only 14 percent of drivers plan to purchase an EV as their next car. Even when there is an expressed interest in purchasing a battery electric or hybrid vehicle, it often does not turn into an actual purchase. A 2022 CarGurus survey found that 35 percent of new car buyers expressed an interest in purchasing a hybrid, but only 13 percent eventually did. Similarly, 22 percent expressed interest in a battery electric vehicle (BEV), but only 5 percent bought one. Each potential EV buyer assesses their individual needs against the benefits and risks an EV offers. However, until mainstream public confidence reaches the point where the perceived combination of risks of a battery electric vehicle purchase (range, affordability, reliability, and behavioral changes) matches that of an ICE vehicle, EV purchases are going to be the exception rather than the norm.
How much range is enough?

Studies differ about how far drivers want to be able to go between charges. One Bloomberg study found 341 miles was the average range desired, while Deloitte Consulting’s 2022 Global Automotive Consumer Study found U.S. consumers want to be able to travel 518 miles on a fully charged battery in a BEV that costs $50,000 or less. Arguments over how much range is needed are contentious. There are some who argue that because 95 percent of American car trips are 30 miles or less, a battery range of 250 miles or less is all that is needed. They also point out that this would reduce the price of the EV, since batteries account for about 30 percent of an EV’s total cost. In addition, using smaller batteries would allow more EVs to be built, and potentially relieve pressure on the battery supply chain. If longer trips are needed, well, “bring some patience and enjoy the charging experience” seems to be the general advice. While perhaps logical, these arguments are not going to influence typical buying decisions much. The first question potential EV buyers are going to ask themselves is, “Am I going to be paying more for a compromised version of mobility?” says Alexander Edwards, president of Strategic Vision, a research-based consultancy that aims to understand human behavior and decision-making.

Driver’s side view of the 2024 Chevrolet Equinox EV 3LT. Chevrolet

Edwards explains that potential customers do not have range anxiety per se: If they believe they require a vehicle that must go 400 miles before stopping, “even if once a month, once a quarter, or once a year,” all vehicles that cannot meet that criterion will be excluded from their buying decision. Range anxiety, therefore, is more a concern for EV owners. Edwards points out that, regarding range, most BEV owners own at least one ICE vehicle to meet their long-distance driving needs. What exactly the “range” of a BEV is, is itself becoming a heated point of contention.
While ICE vehicles’ driving ranges are affected by weather and driving conditions, the effects are well understood after decades of experience. This experience is lacking among non-EV owners. Extreme heat and cold negatively affect EV battery range and charging time, as do driving speeds and terrain.

Peter Rawlinson serves as the CEO and CTO of Lucid. Lucid

Some automakers are reticent to say how much range is affected under differing conditions. Others, like Ford CEO Jim Farley, freely admit it: “If you’re pulling 10,000 pounds, an electric truck is not the right solution. And 95 percent of our customers tow more than 10,000 pounds.” GM, though, is promising it will meet heavier towing requirements with its 2024 Chevrolet Silverado EV. However, Lucid Group CEO Peter Rawlinson, in a none-too-subtle dig at both Ford and GM, said, “The correct solution for an affordable pickup truck today is the internal combustion engine.” Ford’s Farley foresees that the heavy-duty truck segment will be sticking with ICE trucks for a while, as “it will probably go hydrogen fuel cell before it goes pure electric.” Many in the auto industry are warning that realistic BEV range numbers under varying conditions need to be widely published, or else risk creating a backlash against EVs in general. Range risk concerns obviously are tightly coupled to EV charging availability. Most charging is assumed to take place at home, but this is not an option for many renters of homes or apartments. Even for those with homes, a garage may not be available for EV charging. Scarce and unreliable EV charging opportunities, as well as publicized EV road-trip horror stories, add to potential EV owners’ perceived and real range risk.

EVs ain’t cheap

Price is another EV purchase risk, comparable to range. Buying a new car is the second most expensive purchase a consumer makes, behind buying a house.
Spending nearly 100 percent of the annual U.S. median household income on an unfamiliar technology is not a minor financial ask. That is one reason why legacy automakers and EV start-ups are attempting to follow Tesla’s success in the luxury vehicle segment, spending much of their effort producing vehicles priced “above the median average annual US household income, let alone buyer in new car market,” Strategic Vision’s Edwards says. On top of the twenty or so luxury EVs already or soon to be on the market, Sony and Honda recently announced that they would be introducing yet another luxury EV in 2026. It is true that some EVs will soon appear in the competitive price range of ICE vehicles, like GM’s low-end Equinox EV SUV, presently priced around $30,000 with a 280-mile range. How long GM will be able to keep that price in the face of battery cost increases and inflationary pressure is anyone’s guess. It has already started to increase the cost of its Chevrolet Bolt EVs, whose prices it had slashed last year, “due to ongoing industry-related pricing pressures.”

The Lucid Air’s price ranges from $90,000 to $200,000 depending on options. Lucid

Analysts believe Tesla intends to spark an EV price war before its competitors are ready for one. This could benefit consumers in the short term, but could also have long-term downside consequences for the EV industry as a whole. Tesla fired its first shot over its competitors’ bows with a recently announced price cut from $65,990 to $52,990 for its basic Model Y, with a range of 330 miles. That makes the Model Y cost-competitive with Hyundai’s $45,500 IONIQ 5 e-SUV with 304 miles of range. Tesla’s pricing power could be hard to counter, at least in the short term.
Ford’s cheapest F-150 Lightning Pro is now $57,869, compared to $41,769 a year ago, due to what Ford says are “ongoing supply chain constraints, rising material costs and other market factors.” The entry-level F-150 XL with an internal combustion engine has risen in the past year from about $29,990 to $33,695.

Carlos Tavares, CEO of Stellantis. Stellantis

Automakers like Stellantis freely acknowledge that EVs are too expensive for most buyers, with Stellantis CEO Carlos Tavares even warning that if average consumers can’t afford EVs as ICE vehicle sales are banned, “There is potential for social unrest.” However, other automakers like BMW are quite unabashed about going after the luxury market, which BMW terms “white hot.” BMW CEO Oliver Zipse does say the company will not leave the “lower market segment,” which includes the battery electric iX1 xDrive30 that retails for A$82,900 in Australia and slightly less elsewhere. It is not available in the United States. Mercedes-Benz CEO Ola Kallenius also believes luxury EVs will be a catalyst for greater EV adoption—eventually. But right now, 75 percent of the company’s investment has been redirected at bringing luxury vehicles to market. The fact that luxury EVs are more profitable no doubt helps keep automakers focused on that market. Ford’s very popular Mustang Mach-E is having trouble maintaining profitability, for instance, which has forced Ford to raise its base price from $43,895 to $46,895. Even in the Chinese market, where smaller EV sales are booming, profits are not. Strains on profitability for automakers and their suppliers may increase further as battery metals prices increase, warns data analysis company S&P Global Mobility.

Jim Rowan, Volvo Cars’ CEO and President. Volvo Cars

As a result, EVs are unlikely to match ICE vehicle prices (or profits) anytime soon, even for smaller EV models, says Renault Group CEO Luca de Meo, because of the ever-increasing cost of batteries.
Mercedes Chief Technology Officer Marcus Schäfer agrees and does not see EV/ICE price parity “with the [battery] chemistry we have today.” Volvo CEO Jim Rowan disagrees with both of them, however, seeing ICE-EV price parity coming by 2025–2026. Interestingly, a 2019 Massachusetts Institute of Technology (MIT) study predicted that as EVs became more widespread, battery prices would climb because the demand for lithium and other battery metals would rise sharply. As a result, the study indicated EV/ICE price parity was likely closer to 2030, with the expectation that new battery chemistries would be introduced by then. Many argue, however, that total cost of ownership (TCO) should be used as the EV purchase decision criterion rather than sticker price. The total cost of ownership of an EV is generally less than that of an ICE vehicle over its expected life, since EVs have lower maintenance costs and electricity is less expensive per mile than gasoline; tax incentives and rebates help a lot as well. However, how long it takes to hit the break-even point depends on many factors, like the cost differential of a comparable ICE vehicle, depreciation, taxes, insurance costs, the cost of electricity and petrol in a region, whether charging takes place at home, and so on. And TCO rapidly loses its selling-point appeal if electricity prices go up, as is happening in the UK and in Germany. Even if the total cost of ownership is lower for an EV, a potential EV customer may not be interested if meeting today’s monthly auto payments is difficult. Extra costs, like needing to install a fast charger at home, which can add several thousand dollars more, or higher insurance costs, which could add an extra $500–$600 a year, may also be seen as buying impediments and can change the TCO equation.

Reliability and other major tech risks

To perhaps distract wary EV buyers from range and affordability issues, the automakers have focused their efforts on highlighting EV performance.
Raymond Roth, a director at financial advisory firm Stout Risius Ross, observes that among automakers, “There’s this arms race right now of best-in-class performance” being the dominant selling point. This “wow” experience is being pursued by every EV automaker. Mercedes CEO Kallenius, for example, says that to convert its current luxury vehicle owners to EVs, “the experience for the customer in terms of the torque, the performance, everything [must be] fantastic.” Nissan, which seeks a more mass-market buyer, runs commercials exclaiming, “Don’t get an EV for the ‘E’, but because it will pin you in your seat, sparks your imagination and takes your breath away.” EV reliability issues may also take one’s breath away. Reliability is “extremely important” to new-car buyers, according to a 2022 report from Consumer Reports (CR). Currently, EV reliability is nothing to brag about. CR’s report says that “On average, EVs have significantly higher problem rates than internal combustion engine (ICE) vehicles across model years 2019 and 2020.” BEVs dwell at the bottom of the rankings. Reliability may prove to be an Achilles’ heel for automakers like GM and Ford. GM CEO Mary Barra has very publicly promised that GM would no longer build “crappy cars.” The ongoing problems with the Chevy Bolt undercut that promise, and if its new Equinox EV has issues, it could hurt sales. Ford has reliability problems of its own, paying $4 billion in warranty costs last year alone. Its Mustang Mach-E has been subject to several recalls over the past year. Even perceived quality-leader Toyota has been embarrassed by wheels falling off weeks after the introduction of its electric bZ4X SUV, the first in a new series of “bZ”—beyond zero—electric vehicles.

A Tesla caught up in a mudslide in Silverado Canyon, Calif., on March 10, 2021. Jae C. Hong/AP Photo

Troubles with vehicle electronics, which have plagued ICE vehicles as well for some time, seem even worse in EVs, according to Consumer Reports’ data. This should not be surprising, since EVs are packed with the latest electronic and software features to make them attractive, like new biometric capability, but these often do not work. EV start-up Lucid is struggling with a range of software woes, and software problems have pushed back EV launches at Audi, Porsche, and Bentley, all part of the Volkswagen Group, by years. Another reliability-related issue is getting an EV repaired when something goes awry or there is an accident. Right now, there is a dearth of EV-certified mechanics and repair shops. The UK Institute of the Motor Industry (IMI) estimates the country will need 90,000 EV-trained technicians by 2030, yet less than 7 percent of the country’s automotive service workforce of 200,000 vehicle technicians is EV qualified. In the US, the situation is no better. The National Institute for Automotive Service Excellence (ASE), which certifies auto repair technicians, says the US has 229,000 ASE-certified technicians. However, only some 3,100 are certified for electric vehicles. With many automakers moving to reduce their dealership networks, resolving problems that over-the-air (OTA) software updates cannot fix might be troublesome. Furthermore, the costs and time needed to repair an EV are higher than for ICE vehicles, according to the data analytics company CCC. Reasons include a greater need to use original equipment manufacturer (OEM) parts and the cost of scans and recalibration of advanced driver assistance systems, which have been rising for ICE vehicles as well. Furthermore, technicians need to ensure battery integrity to prevent potential fires. And some batteries, along with their battery management systems, need work.
Two examples: recalls involving the GM Bolt and the Hyundai Kona, with the former likely to cost GM $1.8 billion and the latter Hyundai $800 million to fix, according to Stout’s 2021 Automotive Defect and Recall Report. Furthermore, the battery defect data compiled by Stout indicates “incident rates are rising as production is increasing and incidents commonly occur across global platforms,” with both design and manufacturing defects starting to appear. CCC data indicate that battery packs damaged in a crash do need replacement, and more than 50 percent of such vehicles were deemed a total loss by the insurance companies. EVs also need to revisit the repair center more often after they’ve been repaired than ICE vehicles, hinting at the increased difficulty of repairing them. Additionally, EV tire tread wear needs closer inspection than on ICE vehicles. Lastly, as auto repair centers need to invest in new equipment to handle EVs, these costs will be passed along to customers for some time. Electric vehicle and charging network cybersecurity is also growing as a perceived risk. A 2021 survey by insurance company HSB found that an increasing number of drivers, not only of EVs but of ICE vehicles, are concerned about their vehicle’s security. Some 10 percent reported “a hacking incident or other cyber-attack had affected their vehicle,” HSB reported. Reports of charging stations being compromised are increasingly common.
The risk has reached the attention of the US Office of the National Cyber Director, which recently held a forum of government officials, automakers, suppliers, and EV charging manufacturers focusing on “cybersecurity issues in the electric vehicle (EV) and electric vehicle supply equipment (EVSE) ecosystem.” The concern is that EV uptake could falter if EV charging networks are not perceived as being secure. A sleeper risk that may explode into a massive problem is an EV owner’s right to repair their vehicle. In 2020, Massachusetts passed a law that allows vehicle owners to take their cars to whatever repair shop they wish and gives independent repair shops the right to access real-time vehicle data for diagnostic purposes. Auto dealers have sued to overturn the law, and some automakers like Subaru and Kia have disabled the advanced telematics systems in cars sold in Massachusetts, often without telling new customers about it. GM and Stellantis have also said they cannot comply with the Massachusetts law, and are not planning to do so, because it would compromise their vehicles’ safety and cybersecurity. The Federal Trade Commission is looking into the right-to-repair issue, and President Biden has come out in support of it.

You expect me to do what, exactly?

Failure to change consumer behavior poses another major risk to the EV transition. Take charging. It requires new consumer behavior in terms of understanding how and when to charge, and what to do to keep an EV battery healthy. The information on the care and feeding of a battery, as well as how to maximize vehicle range, can resemble a manual for owning a new, exotic pet. It does not help when an automaker like Ford tells its F-150 Lightning owners they can extend their driving range by relying on the heated seats to stay warm instead of the vehicle’s climate control system.
Keeping in mind such issues, and how one might work around them, increases a driver’s cognitive load—things that must be remembered in case they must be acted on. “Automakers spent decades reducing cognitive load with dash lights instead of gauges, or automatic instead of manual transmissions,” says University of Michigan professor emeritus John Leslie King, who has long studied human interactions with machines. King notes, “In the early days of automobiles, drivers and chauffeurs had to monitor and be able to fix their vehicles. They were like engineers. For a time in New York City, one had to be a licensed engineer to drive a steam-powered auto. In some aspects, EV drivers return to these roots. This might change over time, but for now it is a serious issue.”

The first-ever BMW iX1 xDrive30, in Mineral White metallic. BMW AG

This cognitive load keeps changing as well. For instance, “common knowledge” about when EV owners should charge is not set in concrete. The long-standing mantra has been to charge EV batteries at home at night, when electricity rates and stress on the electric grid are low. Recent research from Stanford University says this is wrong, at least for Western states. Stanford’s research shows that electricity rates should encourage EV charging during the day, at work or at public chargers, to prevent evening grid peak-demand problems, which could increase by as much as 25 percent in a decade. The Wall Street Journal quotes the study’s lead author, Siobhan Powell, as saying that if everyone were charging their EVs at night all at once, “it would cause really big problems.” Asking EV owners to refrain from charging their vehicles at home during the night is going to be difficult, since EVs are being sold on the convenience of charging at home.
Transportation Secretary Pete Buttigieg emphasized this very point when describing how great EVs are to own: “And the main charging infrastructure that we count on is just a plug in the wall.” EV owners increasingly find public charging unsatisfying; it is “one of the compromises battery electric vehicle owners have to make,” says Strategic Vision’s Alexander Edwards, “that drives 25 percent of battery electric vehicle owners back to a gas-powered vehicle.” Fixing the multiple problems underlying EV charging will not likely happen anytime soon. Another behavior-change risk relates to the post-purchase buying behavior automakers want from EV owners. Automakers see EV (and ICE vehicle) advanced software and connectivity as a gateway to a software-as-a-service model that generates new, recurring revenue streams across the life of the vehicle. Automakers seem to view EVs as razors through which they can sell software as the razor blades. Monetizing vehicle data and subscriptions could generate $1.5 trillion by 2030, according to McKinsey. VW thinks it will generate “triple-digit millions” in future sales by selling customized subscription services, like offering autonomous driving on a pay-per-use basis; it envisions customers willing to pay 7 euros per hour for the capability. Ford believes it will earn $20 billion, Stellantis some $22.5 billion, and GM $20 to $25 billion from paid software-enabled vehicle features by 2030. Already for ICE vehicles, BMW is reportedly offering an $18-a-month subscription (or $415 for “unlimited” access) for heated front seats in multiple countries, though not yet in the U.S. GM has started charging $1,500 for a three-year “optional” OnStar subscription on all Buick and GMC vehicles, as well as the Cadillac Escalade SUV, whether the owner uses it or not. 
And Sony and Honda have announced that their luxury EV will be subscription-based, although they have not defined exactly what this means in terms of standard versus paid-for features. It would not be surprising to see it follow Mercedes’ lead: the automaker will increase the acceleration of its EQ series if an owner pays a $1,200-a-year subscription fee. Essentially, automakers are trying to normalize paying for what used to be offered as standard equipment or as an upgrade option. Whether they will be successful is debatable, especially in the U.S. “No one is going to pay for subscriptions,” says Strategic Vision’s Edwards, who points out that microtransactions are absolutely hated in the gaming community. Automakers risk a major consumer backlash by using them. To get to EVs at scale, each of the risks related to range, affordability, reliability, and behavior change will need to be addressed by automakers and policymakers alike. With dozens of new battery electric vehicles becoming available for sale in the next two years, potential EV buyers now have a much greater range of options than before. The automakers who manage EV risks best—along with offering compelling overall platform performance—will be the ones starting to claw back some of their hefty EV investments. No single risk may be a deal breaker for an early EV adopter, but for skeptical ICE vehicle owners, each risk is another reason not to buy, regardless of the perceived benefits on offer. If EV-only families are going to be the norm, the benefits of purchasing EVs will need to be above—and the risks associated with owning them will need to match or fall below—those of today’s and future ICE vehicles. In the next articles of this series, we’ll explore the changes that may be necessary to personal lifestyles to achieve 2050 climate goals.

  • New Video “Just Scratching the Surface” of What Atlas Can Do
    by Evan Ackerman on 23. January 2023. at 15:00

    With Boston Dynamics’ recent(ish) emphasis on making robots that can do things that are commercially useful, it’s always good to be gently reminded that the company is still at the cutting edge of dynamic humanoid robotics. Or in this case, forcefully reminded. In its latest video, Boston Dynamics demonstrates some spectacular new capabilities with Atlas focusing on perception and manipulation, and the Atlas team lead answers some of our questions about how they pulled it off. One of the highlights here is Atlas’s ability to move and interact dynamically with objects, and especially with objects that have significant mass to them. The 180 while holding the plank is impressive, since Atlas has to account for all that added momentum. Same with the spinning bag toss: As soon as the robot releases the bag in midair, its momentum changes, which it has to compensate for on landing. And shoving that box over has to be done by leaning into it, but carefully, so that Atlas doesn’t topple off the platform after it. While the physical capabilities that Atlas demonstrates here are impressive (to put it mildly), this demonstration also highlights just how much work remains to be done to teach robots to be useful like this in an autonomous, or even a semi-autonomous, way. For example, environmental modification is something that humans do all the time, but we rely heavily on our knowledge of the world to do it effectively. I’m pretty sure that Atlas doesn’t have the capability to see a nontraversable gap, consider what kind of modification would be required to render the gap traversable, locate the necessary resources (without being told where they are first), and then make the appropriate modification autonomously in the way a human would—the video shows advances in manipulation rather than decision making. 
This certainly isn’t a criticism of what Boston Dynamics is showing in this video; it’s just to emphasize there is still a lot of work to be done on the world understanding and reasoning side before robots will be able to leverage these impressive physical skills on their own in a productive way. There’s a lot more going on in this video, and Boston Dynamics has helpfully put together a bit of a behind-the-scenes explainer: And for a bit more on this, we sent a couple of questions over to Boston Dynamics, and Atlas Team Lead Scott Kuindersma was kind enough to answer them for us. How much does Atlas know in advance about the objects that it will be manipulating, and how important is this knowledge for real-world manipulation? Scott Kuindersma: In this video, the robot has a high-level map that includes where we want it to go, what we want it to pick up, and what stunts it should do along the way. This map is not an exact geometric match for the real environment; it is an approximate description containing obstacle templates and annotated actions that is adapted online by the robot’s perception system. The robot has object-relative grasp targets that were computed offline, and the model-predictive controller (MPC) has access to approximate mass properties. We think that real-world robots will similarly leverage priors about their tasks and environments, but what form these priors take and how much information they provide could vary a lot based on the application. The requirements for a video like this lead naturally to one set of choices—and maybe some of those requirements will align with some early commercial applications—but we’re also building capabilities that allow Atlas to operate at other points on this spectrum. How often is what you want to do with Atlas constrained by its hardware capabilities? At this point, how much of a difference does improving hardware make, relative to improving software? Kuindersma: Not frequently. 
When we occasionally spend time on something like the inverted 540, we are intentionally pushing boundaries and coming at it from a place of playful exploration. Aside from being really fun for us and (hopefully) inspiring to others, these activities nearly always bear enduring fruit and leave us with more capable software for approaching other problems. The tight integration between our hardware and software groups—and our ability to design, iterate, and learn from each other—is one of the things that makes our team special. This occasionally leads to behavior-enabling hardware upgrades and, less often, major redesigns. But from a software perspective, we continuously feel like we’re just scratching the surface on what we can do with Atlas. Can you elaborate on the troubleshooting process you used to make sure that Atlas could successfully execute that final trick without getting tangled in its own limbs? Kuindersma: The controller works by using a model of the robot to predict and optimize its future states. The improvement made in this case was an extension to this model to include the geometric shape of the robot’s limbs and constraints to prevent them from intersecting. In other words, rather than specifically tuning this one behavior to avoid self-collisions, we added more model detail to the controller to allow it to better avoid infeasible configurations. This way, the benefits carry forward to all of Atlas’s behaviors. Is the little hop at the end of the 540 part of the planned sequence, or is Atlas able to autonomously use motions like that to recover from dynamic behaviors that don’t end up exactly as expected? How important will this kind of capability be for real-world robots? Kuindersma: The robot has the ability to autonomously take steps, lean, and/or wave its limbs around to recover balance, which we leverage on pretty much a daily basis in our experimental work. 
The hop after the inverted 540 was part of the behavior sequence in the sense that the robot was told to jump after landing, but where it jumped to and how it landed came from the controller (and generally varied between individual robots and runs). Our experience with deploying Spot all over the world has reinforced how important it is for mobile robots to be able to adjust and recover if they get bumped, slip, fall, or encounter unexpected obstacles. We expect the same will be true for future robots doing work in the real world. What else can you share with us about what went into making the video? Kuindersma: A few fun facts: The core new technologies around MPC and manipulation were developed throughout this year, but the time between our whiteboard sketch for the video and completing filming was six weeks. The tool bag throw and spin jump with the 2- by 12-inch plank are online generalizations of the same 180 jump behavior that was created two years ago as part of our mobility work. The only differences in the controller inputs are the object model and the desired object motion. Although the robot has a good understanding of throwing mechanics, the real-world performance was sensitive to the precise timing of the release and whether the bag cloth happened to get caught on the finger during release. These details weren’t well represented by our simulation tools, so we relied primarily on hardware experiments to refine the behavior until it worked every time.
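Kuindersma’s point about self-collision avoidance (add the limbs’ geometry and non-intersection constraints to the controller’s predictive model, and every behavior benefits) can be illustrated with a toy trajectory optimization. This is a generic SciPy sketch, not Boston Dynamics’ controller: the “limbs” are just two discs in a plane that must swap places without overlapping.

```python
import numpy as np
from scipy.optimize import minimize

# Toy version of "add geometry to the model, get collision avoidance for free":
# two "limbs", modeled as discs of radius R in the plane, must swap positions
# over N timesteps. The optimizer minimizes a smoothness cost subject to a
# non-intersection constraint at every timestep.
N, R = 20, 0.3
start = np.array([[-1.0, 0.0], [1.0, 0.0]])  # limb A, limb B
goal = np.array([[1.0, 0.0], [-1.0, 0.0]])   # swapped

def unpack(x):
    return x.reshape(N, 2, 2)  # time x limb x (x, y)

def cost(x):
    # Penalize squared step lengths along the whole trajectory (smoothness).
    path = np.concatenate([start[None], unpack(x), goal[None]])
    return np.sum(np.diff(path, axis=0) ** 2)

def min_separation(x):
    traj = unpack(x)
    d = np.linalg.norm(traj[:, 0] - traj[:, 1], axis=1)
    return d - 2 * R  # must stay >= 0: discs never overlap

guess = np.linspace(start, goal, N)  # straight lines (limbs would collide)
guess[:, 0, 1] += 0.25               # nudge limbs apart to break symmetry
guess[:, 1, 1] -= 0.25
res = minimize(cost, guess.ravel(),
               constraints={"type": "ineq", "fun": min_separation},
               options={"maxiter": 500})
traj = unpack(res.x)
clearance = np.linalg.norm(traj[:, 0] - traj[:, 1], axis=1).min()
print(round(clearance, 2))  # minimum center-to-center distance over the plan
```

Because the constraint lives in the model rather than in any one scripted motion, the same optimizer keeps the discs apart no matter what start and goal you hand it, which is the generalization Kuindersma describes.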

  • Designing a Miniaturized Wastewater Treatment Plant for Micropollutant Degradation
    by Rachel Keatley on 23. January 2023. at 13:00

This sponsored article is brought to you by COMSOL. The 1985 action-adventure TV series MacGyver showcased the life of Angus MacGyver, a secret agent who solved problems using items he had on hand. For example, in one episode, he made a heat shield out of used refrigerator parts. In another, he made a fishing lure with a candy wrapper. More than three decades later, the show still has relevance. The verb MacGyver, to design something in a makeshift or creative way, was added to the Oxford English Dictionary in 2015. Try putting your MacGyver skills to the test: If you were handed some CDs, what would you make out of them? Reflective wall art, mosaic ornaments, or a wind chime, perhaps? What about a miniaturized water treatment plant? This is what a team of engineers and researchers is doing at Eden Tech, a company based in Paris, France, that specializes in the development of microfluidics technology. Within its R&D department, Eden Cleantech, the company is developing a compact, energy-saving water treatment system to help tackle the growing presence of micropollutants in wastewater. To analyze the performance of their AKVO system (named after the Latin word for water, aqua), which is made from CDs, Eden Tech turned to multiphysics simulation. Contaminants of Emerging Concern “There are many ways micropollutants make it into wastewater,” says Wei Zhao, a senior chemical engineer and chief product officer at Eden Tech. The rise of these microscopic chemicals in wastewater worldwide is a result of daily human activities. For instance, when we wash our hands with soap, wipe down our sinks with cleaning supplies, or flush medications out of our bodies, various chemicals are washed down the drain and end up in sewage systems. Some of these chemicals are classified as micropollutants, or contaminants of emerging concern (CECs). In addition to domestic waste, agricultural pollution and industrial waste are also to blame for the rise of micropollutants in our waterways. 
Micropollutants are added to the world’s lakes, rivers, and streams every day. Unfortunately, many conventional wastewater treatment plants (WWTPs, Figure 1) are not designed to remove these potentially hazardous chemical residues, so they are often reintroduced to various bodies of water, including rivers, streams, lakes, and even drinking water. Although the risk they pose to human and environmental health is not fully understood, the increasing levels of pollutants found in the world’s bodies of water are of concern. With this growing problem in mind, Eden Tech got to work on developing a solution; thus AKVO was born. Each AKVO CD core is designed to have a diameter of 15 cm and a thickness of 2 mm. One AKVO cartridge is composed of stacked CDs of varying numbers, combined to create a miniaturized factory. One AKVO core treats 0.5 to 2 m³ of water per day, which means that an AKVO system composed of 10,000 CDs can meet average municipal needs. This raises the question: How can a device made from CDs decontaminate water? A Sustainable Wastewater Treatment Method A single AKVO system (Figure 2) consists of a customizable cartridge filled with stacked CDs, each with a microchannel network inscribed on it. It removes undesirable elements in wastewater, like micropollutants, by circulating the water through its microchannel networks. These networks are energy efficient because they require only a small pump to circulate and clean large volumes of water. The AKVO system’s cartridges can easily be replaced, with Eden Tech taking care of their recycling. AKVO’s design combines photocatalysis and microfluidics in one compact system. Photocatalysis, a type of advanced oxidation process (AOP), is a fast and effective way to remove micropollutants from wastewater. 
Compared to other AOPs, it is considered safer and more sustainable because it is powered by a light source. During photocatalysis, light is absorbed by photocatalysts that have the ability to create electron-hole pairs, which generate free hydroxyl radicals that are able to react with target pollutants and degrade them. The combination of photocatalysis and microfluidics for the treatment of wastewater has never been done before. “It is a very ambitious project,” said Zhao. “We wanted to develop an innovative method in order to provide an environmentally friendly, efficient way to treat wastewater.” AKVO’s current design did not come easily, as Zhao and his team faced several design challenges along the way. Overcoming Design Challenges When in use, wastewater flows through AKVO’s microchannels, whose walls hold an immobilized chemical agent (a catalyst). The purpose of the catalyst, titanium dioxide in this case, is to react with the micropollutants and help remove them in the process. However, AKVO’s fast flow rate complicates this action. “The big problem is that [AKVO] has microchannels with fast flow rates, and sometimes when we put the chemical agent inside one of the channels’ walls, the micropollutants in the wastewater cannot react efficiently with the agent,” said Zhao. In order to increase the chance of contact between the micropollutants and the immobilized chemical agent, Zhao and his team opted to use a staggered herringbone micromixer (SHM) design for AKVO’s microchannel networks (Figure 3). To analyze how well the SHM design supports chemical reactions for micropollutant degradation, Zhao used the COMSOL Multiphysics software. Simulating Chemical Reactions for Micropollutant Degradation In his work, Zhao built two different models in COMSOL Multiphysics (Figure 4), named the Explicit Surface Adsorption (ESA) model and the Converted Surface Concentration (CSC) model. Both of these models account for chemical and fluid phenomena. 
In both models, Zhao found that AKVO’s SHM structure creates vortices in the flow moving through it, which gives the micropollutants and the chemical agent a longer reaction period and enhances the mass transfer between fluid layers. However, the results of the ESA model showed that the design purified only about 50 percent of the micropollutants under treatment, less than what Zhao expected. Unlike the ESA model (Figure 5), the CSC model assumes that there is no adsorption limitation: as long as a micropollutant arrives at the surface of a catalyst, a reaction happens, an assumption discussed in the existing literature (Ref. 1). In this model, Zhao analyzed how the design performed for the degradation of six different micropollutants: gemfibrozil, ciprofloxacin, carbamazepine, clofibric acid, bisphenol A, and acetaminophen (Figure 6). The results of this model were in line with what Zhao expected, with more than 95 percent of the micropollutants being treated. “We are really satisfied with the results of COMSOL Multiphysics. My next steps will be focused on laboratory testing [of the AKVO prototype]. We are expecting to have our first prototype ready by the beginning of 2022,” said Zhao. The prototype will eventually be tested at hospitals and water treatment stations in the south of France. Using simulation for this project has helped the Eden Tech team save time and money. Developing a prototype of a microfluidic system like AKVO is costly: to imprint microchannel networks on each of AKVO’s CDs, a microchannel photomask is needed, and according to Zhao, fabricating one photomask costs about €3,000 ($3,500). Therefore, it is very important that the team is confident the system works well prior to fabrication. “COMSOL Multiphysics has really helped us validate our models and our designs,” said Zhao. 
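The CSC model’s assumption, that any micropollutant reaching the catalyst surface reacts, is often paired with first-order decay kinetics, C(t) = C0·exp(−kt), in the photocatalysis literature (see Ref. 1). A quick sketch with an illustrative rate constant, not one of Eden Tech’s measured values, shows how removal fraction and residence time trade off:

```python
import math

def removal_fraction(k, t):
    """First-order decay C(t) = C0 * exp(-k * t): fraction degraded after time t."""
    return 1.0 - math.exp(-k * t)

k = 0.1  # illustrative rate constant in 1/s, not a measured AKVO value
# Residence time needed to reach the >95 percent removal the CSC model predicts:
t_95 = -math.log(1.0 - 0.95) / k
print(round(t_95, 1), "s")                 # ≈ 30.0 s
print(round(removal_fraction(k, 10.0), 2))  # 10 s in the channel → ≈ 0.63 removed
```

The exponential form is why the SHM mixing matters: vortices that keep pollutants reaching the catalyst surface effectively raise k, cutting the channel length needed for a given removal target.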
Pioneer in the Treatment of Micropollutants In 2016, Switzerland introduced legislation mandating that wastewater treatment plants remove micropollutants from wastewater. Their goal? Filter out over 80 percent of micropollutants at more than 100 Swiss WWTPs. Following Switzerland’s lead, many other countries are now considering how to handle the growing presence of these contaminants in their waterways. AKVO has the potential to provide a compact, environmentally friendly way to help address this ongoing problem. The next time you go to throw out an old CD, or any other household item for that matter, ask yourself: What would MacGyver do? Or, better yet: What would Eden Tech do? You might be holding the building blocks for their next innovative design. Reference C. S. Turchi and D. F. Ollis, “Photocatalytic degradation of organic water contaminants: Mechanisms involving hydroxyl radical attack,” Journal of Catalysis, Vol. 122, p. 178, 1990. MacGyver is a registered trademark of CBS Studios Inc. COMSOL AB and its subsidiaries and products are not affiliated with, endorsed by, sponsored by, or supported by CBS Studios Inc.

  • Video Friday: Drones in Trees
    by Evan Ackerman on 20. January 2023. at 17:48

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. Robotics Summit & Expo: 10–11 May 2023, BOSTON ICRA 2023: 29 May–2 June 2023, LONDON RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE RSS 2023: 10–14 July 2023, DAEGU, KOREA IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL Enjoy today’s videos! With the historic Kunming-Montreal Agreement of 18 December 2022, more than 200 countries agreed to halt and reverse biodiversity loss. But becoming nature-positive is an ambitious goal, held back in part by the lack of efficient and accurate tools to capture snapshots of global biodiversity. This is a task where robots, in combination with environmental DNA (eDNA) technologies, can make a difference. Our recent findings show a new way to sample surface eDNA with a drone, which could be helpful in monitoring biodiversity in terrestrial ecosystems. The eDrone can land on branches and collect eDNA from the bark using a sticky surface. The eDrone collected surface eDNA from the bark of seven different trees, and by sequencing the collected eDNA we were able to identify 21 taxa, including insects, mammals, and birds. [ ETH Zurich ] Thanks, Stefano! How can we bring limbed robots into real-world environments to complete challenging tasks? Dr. Dimitrios Kanoulas and the team at UCL Computer Science’s Robot Perception and Learning Lab are exploring how we can use autonomous and semi-autonomous robots to work in environments that humans cannot. [ RPL UCL ] Thanks, Dimitrios! Bidirectional design, four-wheel steering, and a compact length give our robotaxi unique agility and freedom of movement in dense urban environments—or in games of tic-tac-toe. May the best robot win. 
Okay, but how did they not end this video with one of the cars drawing a “Z” off to the left side of the middle row? [ Zoox ] Thanks, Whitney! DEEP Robotics wishes y’all happy, good health in the year of the rabbit! Binkies! [ Deep Robotics ] This work presents a safety-critical locomotion-control framework for quadrupedal robots. Our goal is to enable quadrupedal robots to safely navigate in cluttered environments. [ Hybrid Robotics ] At 360.50 kilometers per hour, this is the world speed record for a quadrotor. [ Quad Star Drones ] via [ Gizmodo ] When it rains, it pours—and we’re designing the Waymo Driver to handle it. See how shower tests, thermal chambers, and rugged tracks at our closed-course facilities ensure our system can navigate safely, no matter the forecast. [ Waymo ] You know what’s easier than picking blueberries? Picking greenberries, which are much less squishy. [ Sanctuary AI ] The Official Wrap-Up of ABU ROBOCON 2022 New Delhi, India. [ ROBOCON ]

  • Overcoming Systemic Racism Through System Engineering
    by Robb Mandelbaum on 20. January 2023. at 16:21

In parts of the United States, using the term “systemic racism” to refer to persistent discrimination against Black people has become a political flash point. To some ears, it sounds like an attack on the country and the local community. Several states have enacted laws that ban, or would appear to ban, discussing the concept in public schools and colleges, and even private workplaces. But racial-equity consultant Tynesia Boyea-Robinson uses the term with an engineer’s precision. When she first heard the phrase, she recalled her training in quality control in the transportation unit of GE Research, in Erie, Pa. And, sure enough, a lightbulb went on in her head: The system could be reengineered. “Oh my God, we can fix this!” she thought. “I don’t think everybody else sees it that way.” Boyea-Robinson helps companies, government agencies, and other organizations meet goals for diversity and equity through her consulting firm, CapEQ. In October, her second book on this work, The Social Impact Advantage, was published. And she is the steward of Path to 15/55, an ambitious effort to deliver desperately needed capital to Black businesses across the United States. Since 2018, Boyea-Robinson has been assembling a coalition—including financial institutions, grassroots community groups, political and policy leaders, and corporate and philanthropic donors—to reprogram the systems of lending to and investing in these businesses. Employer: CapEQ. Title: President and CEO. Alma mater: Duke University’s Pratt School of Engineering. 
Boyea-Robinson grew up in Cocoa Beach, Fla., where her father fixed satellites for the U.S. Air Force and her stepmother gave manicures in the family’s living room. In other circumstances, the straight As Boyea-Robinson earned at school and the lessons in mechanics her dad taught her might have ensured a trajectory toward a top STEM university. But her parents hadn’t gone to college and didn’t push her in that direction. Moreover, as the oldest, she was expected to help care for her four younger siblings. She expected to enroll at a community college until one of her stepmother’s clients pushed her to set her sights higher. She attended Duke University’s Pratt School of Engineering, in Durham, N.C., where she earned a dual bachelor’s degree in electrical engineering and computer science. The curriculum was daunting, and she had to confront a persistent sense of being an outsider. But it was more than just the academics. “There’s so many things about the culture of college that my parents couldn’t teach me,” she says. Adding to her initial anxiety was her status as one of the relatively few women at the engineering school—women made up just a quarter of the student body at Pratt—and there were even fewer Black students enrolled there (around 5 percent). But when Boyea-Robinson graduated in 1999, she landed a plum ­information-management job at General Electric through the company’s prestigious leadership program. Though her anxiety about fitting in lingered, her career flourished. In 2003, she headed to Harvard Business School for an MBA that could give her upward trajectory an extra boost. Then her course changed when she took an internship at a nonprofit called Year Up. 
The organization helps prepare young adults, mostly poorer people of color, for entry-level IT jobs at large companies—jobs that recalled her first assignments at GE. “That student was me,” she says, “with different options and choices.” Her assignment was to map out an expansion of Year Up from Boston to either Washington, D.C., or New York City. Boyea-Robinson pitched both. When she graduated in 2005, the nonprofit hired her to open the Washington location. She launched the first class in January 2006, and as she built Year Up’s presence in Washington, ­Boyea-Robinson’s work became a model for the organization nationwide, starting in New York later that year. Today, the nonprofit serves 16 metro areas and operates virtually in five others. At Year Up, Boyea-Robinson began to hear about systemic racism, the biases that people collectively inject, consciously or not, into so many of the institutions and the rules governing society, leading to the disparate treatment of different groups of people. The knock-on effects from that discrimination exacerbate inequality—which then reinforces those biases in a sort of feedback loop. Thinking about all this, Boyea-Robinson concluded that she wanted to use systems engineering to tackle the problems of systemic racism on a larger scale. Since launching CapEQ in 2011, Boyea-Robinson has worked with more than 50 clients, helping businesses such as Marriott and Nordstrom address their diversity and equity shortcomings. She has also worked with nonprofits and others seeking broader change, including those collaborating on Path to 15/55. Path to 15/55 takes its premise from recent research by one of those organizations, the Association for Enterprise Opportunity, a trade group of nonprofits that make small loans to underserved entrepreneurs. The group found that if 15 percent of existing Black businesses could finance a single new employee, it would create US $55 billion in new economic activity. 
But Black entrepreneurs have been hobbled by the effects of an especially pernicious example of systemic racism. Until the 1960s, federal government policies explicitly prohibited Black people from buying homes in white neighborhoods and simultaneously decimated the value of Black neighborhoods. The result has been to deny most Black families the opportunity to build generational wealth on par with their white counterparts. Even today, Black people are less likely to seek, or to obtain, a home mortgage. Most small businesses are financed by savings or loans conditioned on good credit scores and a home that serves as collateral. The coalition Boyea-Robinson assembled is pressing for systemic change on several levels. It’s pushing bankers and the financial industry at large to confront their own biases in lending. It also disseminates novel strategies for financing Black businesses to avoid the barriers that Black borrowers face, such as the use of credit scores to assess creditworthiness. The group will then rigorously collect data on which strategies work and which don’t, in order to propagate what’s successful. Separately, it’s agitating for government policy changes to allow these new strategies to flourish. Boyea-Robinson manages Path to 15/55 as if she were testing software with a feedback loop of its own. It starts with building awareness around a specific issue and forging alliances, or alignments, with like-minded organizations, which then go to work as communities of action to implement change. “Everything we learn from communities of action becomes the information that we raise awareness on,” she says. “And the loop starts again: awareness, alignment, action. These are all unit tests that become systems tests.” Boyea-Robinson still finds resistance to financing equity among bank loan officers. “The way racism shows up in lending is bankers saying that this work is not investable,” she says. 
“Shifting the narrative is why we spend so much time sharing reports and stories.” Backed with a $250,000 grant from the Walmart Foundation, Path to 15/55 launched its first Community of Action in January. Piggybacking on work led by the Beneficial State Foundation, ­Boyea-Robinson has recruited five financial institutions to experiment with innovative ways to underwrite loans, and to build durable support within their organizations for the work—which, ­Boyea-Robinson says, is the only way these changes will stick. These institutions are expected to begin lending money by midyear. To lessen the risk of losses, Path to 15/55 will make the $1 million it has raised so far available for these loans. And she’s joining forces with business accelerators to launch a second community of action, aimed at helping Black entrepreneurs buy existing businesses in corporate supply chains, later this year. “Being able to kind of turbocharge work that is already compelling,” she says, “has been pretty exciting.” This article appears in the February 2023 print issue as “Tynesia Boyea-Robinson.”

  • Picosecond Accuracy in Multi-channel Data Acquisition
    by Teledyne on 20. January 2023. at 13:58

    Timing accuracy is vital for multi-channel synchronized sampling at high speed. In this webinar, we explain challenges and solutions for clocking, triggering, and timestamping in giga-sample-per-second data acquisition systems. Learn more about phase-locked sampling, clock and trigger distribution, jitter reduction, trigger correction, record alignment, and more. Register now to join this free webinar!

Date: Tuesday, February 28, 2023
Time: 10 AM PST | 1 PM EST
Duration: 30 minutes

Topics covered in this webinar:
  • Phase-locked sampling
  • Clock and trigger distribution
  • Trigger correction and record alignment
  • Daisy-chaining to achieve 50 ps trigger accuracy for 64 channels sampling at 5 GSPS per channel

Who should attend? Developers who want to optimize performance in high-performance multi-channel systems.
What will attendees learn? How to distribute clocks and triggers, triggering methods, synchronized sampling on multiple boards, and more.
Presenter: Thomas Elter, Senior Field Applications Engineer
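The record-alignment idea the webinar covers can be illustrated with a small sketch: each digitizer board reports a trigger timestamp, and records are shifted so they share a common trigger instant, with the sub-sample remainder kept for later interpolation. This is a generic illustration under assumed units (timestamps in picoseconds, a 5 GSPS rate), not Teledyne's actual API.

```python
import numpy as np

def align_records(records, timestamps_ps, sample_rate_gsps=5.0):
    """Align multi-board records to a common trigger instant.

    records: list of 1-D sample arrays, one per board.
    timestamps_ps: per-board trigger timestamps in picoseconds.
    Each record is shifted by the whole-sample part of its offset from
    the earliest trigger; the sub-sample residue is returned so a
    downstream stage could correct it by interpolation.
    """
    period_ps = 1e3 / sample_rate_gsps              # 200 ps per sample at 5 GSPS
    t0 = min(timestamps_ps)
    aligned, residues = [], []
    for rec, ts in zip(records, timestamps_ps):
        offset_ps = ts - t0
        shift = int(round(offset_ps / period_ps))       # whole-sample correction
        residues.append(offset_ps - shift * period_ps)  # sub-sample remainder
        aligned.append(rec[shift:] if shift > 0 else rec)
    n = min(len(r) for r in aligned)                    # truncate to common length
    return [r[:n] for r in aligned], residues
```

A board triggered 400 ps late at 5 GSPS is simply advanced by two samples; finer skew would be handled by the returned residue.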

  • Andrew Ng: Unbiggen AI
    by Eliza Strickland on 9. February 2022. at 15:31

    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.

Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.

The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it.
Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

When you say you want a foundation model for computer vision, what do you mean by that?

Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

What needs to happen for someone to build a foundation model for video?

Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.

It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step.
One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

I expect they’re both convinced now.

Ng: I think so, yes. Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”

How do you define data-centric AI, and why do you consider it a movement?

Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem.
So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data. When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle.
What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.

Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data.
Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

When you talk about engineering the data, what do you mean exactly?

Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.

What about using synthetic data, is that often a good solution?

Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.
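The kind of tooling Ng describes, which draws your attention to the noisiest slice of a labeled data set, can be sketched minimally: given each item's labels from several annotators, rank items by how badly the annotators disagree. This is a generic illustration of the idea, not Landing AI's implementation; the data structure and function name are assumptions.

```python
from collections import Counter

def flag_inconsistent(labels_by_item):
    """Rank items by annotator disagreement so they can be relabeled first.

    labels_by_item: dict mapping item id -> list of labels assigned by
    different annotators. Returns (item, agreement) pairs, least
    consistent first; fully consistent items are omitted.
    """
    ranked = []
    for item, labels in labels_by_item.items():
        majority_votes = Counter(labels).most_common(1)[0][1]
        agreement = majority_votes / len(labels)  # fraction voting for majority label
        if agreement < 1.0:                       # any disagreement at all
            ranked.append((item, agreement))
    return sorted(ranked, key=lambda pair: pair[1])
```

Reviewing items in this order concentrates relabeling effort exactly where the data is inconsistent, rather than spreading it over the whole set.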
Do you mean that synthetic data would allow you to try the model on more data sets?

Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.

Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.

To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use.
Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.

In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic.
The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.

This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”

  • How AI Will Change Chip Design
    by Rina Diane Caballar on 8. February 2022. at 14:00

    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process. Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version. But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

How is AI currently being used to design the next generation of chips?

Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases.
We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

What are the benefits of using AI for chip design?

Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.

So it’s like having a digital twin in a sense?

Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.

So, it’s going to be more efficient and, as you said, cheaper?

Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

We’ve talked about the benefits. How about the drawbacks?

Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps.
But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together. One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

How can engineers use AI to better prepare and extract insights from hardware or sensor data?

Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.

One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.

What should engineers and designers consider when using AI for chip design?
Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

How do you think AI will affect chip designers’ jobs?

Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

How do you envision the future of AI and chip design?

Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
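The reduced-order-model workflow Gorr describes (sample an expensive physics simulation sparsely, fit a cheap surrogate, then run the dense parameter sweep on the surrogate) can be sketched in a few lines. The "physics" function below is a made-up stand-in, and the polynomial surrogate and parameter ranges are illustrative assumptions, not MathWorks code.

```python
import numpy as np

def expensive_sim(x):
    """Stand-in for an expensive physics-based simulation (hypothetical)."""
    return np.sin(3 * x) + 0.5 * x**2

# Sample the expensive model at a handful of design points...
x_train = np.linspace(0.0, 2.0, 15)
y_train = expensive_sim(x_train)

# ...fit a cheap polynomial surrogate (the reduced-order stand-in)...
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=6))

# ...then sweep the design parameter densely on the surrogate instead.
x_sweep = np.linspace(0.0, 2.0, 2000)   # 2,000 evaluations: cheap now
y_sweep = surrogate(x_sweep)
best_x = x_sweep[np.argmin(y_sweep)]    # candidate design to verify with the full model
```

The dense sweep costs thousands of polynomial evaluations rather than thousands of simulation runs; only the final candidate needs to be re-checked against the full physics model.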

  • Atomically Thin Materials Significantly Shrink Qubits
    by Dexter Johnson on 7. February 2022. at 16:12

    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality. IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability. Now researchers at MIT have managed both to reduce the size of the qubits and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100. “We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.” The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit. Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates.
The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

[Photo caption: Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Nathan Fiske/MIT]

In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another. As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance. In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates. “We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics. On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang.
This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas. While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor. “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.” This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits. “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang. Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
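A back-of-the-envelope parallel-plate estimate shows why a nanometers-thin hBN dielectric shrinks the capacitor footprint so dramatically compared with the roughly 100-by-100-micrometer coplanar plates described above. The specific numbers below (about 100 femtofarads of capacitance, a relative permittivity around 3.5 for hBN, a 5-nanometer dielectric) are illustrative assumptions, not figures from the MIT paper.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_area_um2(c_farads, eps_r, d_m):
    """Plate area in square micrometers for a parallel-plate capacitor.

    From C = eps0 * eps_r * A / d, solved for A.
    """
    area_m2 = c_farads * d_m / (EPS0 * eps_r)
    return area_m2 * 1e12  # m^2 -> um^2

# Illustrative sandwich capacitor: ~100 fF, hBN eps_r ~ 3.5, ~5 nm thick.
sandwich_um2 = plate_area_um2(100e-15, 3.5, 5e-9)  # ~16 um^2, i.e. ~4 um on a side
coplanar_um2 = 100 * 100                           # the article's ~100 um x 100 um plate
```

Under these assumptions the stacked geometry needs orders of magnitude less area than the coplanar plate for the same capacitance, which is consistent with the large density gain the article reports.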
