IEEE News

IEEE Spectrum

  • This IEEE Society’s Secret to Boosting Student Membership
    by Kathy Pretz on 25 April 2024 at 18:00

    What’s a secret to getting more students to participate in an IEEE society? Give them a seat at the table so they have a say in how the organization is run. That’s what the IEEE Robotics and Automation Society has done. Budding engineers serve on the RAS board of directors, have voting privileges, and work within technical committees. “They have been given a voice in how the society runs because, in the end, students are among the main beneficiaries,” says Enrica Tricomi, chair of the RAS’s student activities committee. The SAC is responsible for student programs and benefits. It also makes recommendations to the society’s board about new offerings.

A Guide for Inspiring the Next Generation of Roboticists

The IEEE Robotics and Automation Society isn’t focused only on boosting its student membership. It also wants to get more young people interested in pursuing a robotics career. One way the society’s volunteers try to inspire the next generation of roboticists is through IEEE Spectrum’s award-winning Robots website. The interactive guide features more than 250 real-world robots, with thousands of photos, videos, and exclusive interactives, plus news and detailed technical specifications. The site is designed for anyone interested in robotics, including expert and beginner enthusiasts, researchers, entrepreneurs, students, STEM educators, and other teachers. Schools and students across the globe use the site. Volunteers on the RAS steering committee suggest robots to add, and they help support new content creation on the site.

“You feel listened to and valued whenever there are official decisions to be made, because the board also wants to know the perspective of students on how to offer benefits to the RAS members, especially for young researchers, since hopefully they will be the society’s future leaders,” says Tricomi, a bioengineer who is pursuing a Ph.D. in robotics at Heidelberg University, in Germany.

The society’s approach has paid off. Since 2018, student membership has grown by more than 50 percent, to 5,436. The number of society chapters at student branches has increased from 312 in 2021 to 450. The ability to express opinions isn’t the only reason students are joining, Tricomi says. The society recently launched several programs to engage them, including career fairs, travel grants, and networking opportunities with researchers.

Giving students leadership opportunities

As SAC chair, Tricomi is a voting member of RAS’s administrative committee, which oversees the society’s operations. She says having voting privileges shows “how important it is to the society to have student representation.”

“We receive a lot of support from the highest levels of the society, specifically the society president, Aude Billard, and past president Frank Chongwoo Park,” Tricomi says. “RAS boards have been rejuvenated to engage students even more and represent their voices. The chairs of these boards—including technical activities, conference activities, and publication activities—want to know the SAC chair and cochairs’ opinion on whether the new activities are benefiting students.”

Student members now can serve on IEEE technical committees that involve robotics in the role of student representatives. That was an initiative from Kyujin Cho, IEEE Technical Activities vice president. Tricomi says the designation benefits young engineers because they learn about ongoing research in their field and because they have direct access to researchers.
Student representatives also help organize conference workshops. The students had a hand in creating a welcome kit for conference attendees. The kit, an initiative led by Amy Kyungwon Han, Technical Activities associate vice president, lists each day’s activities and their locations.

Being engaged with the technical topic in which the students work provides them with career growth, visibility in their field, and an opportunity to share their point of view with peers, Tricomi says. “Being young, the first time that you express your opinion in public, you always feel uncomfortable because you don’t have much experience,” she says. “This is the opposite of the message the society wants to send. We want to listen to students’ voices because they are an important part of the society.”

Tricomi herself recently became a member of the Technical Activities board. She joined, she says, because “this is kind of a technical family by choice. And you want to be active and contribute to your family, right? I think that all of us, especially those who are younger, can actively contribute and make a difference not only for the society and for ourselves but also for our peers.”

Job fairs and travel grants

Several new initiatives have been rolled out at the society’s flagship conferences. The meetings have always included onsite events for students to network with each other and to mingle with researchers over lunch. The events give the budding engineers an opportunity to talk with leaders they normally wouldn’t meet, Tricomi says. “It’s much appreciated, especially by very young or shy students,” she says. Some luncheons have included sessions on career advice from leaders in academia and industry, or from startup founders—giving the students a sense of what it’s like to work for such organizations.

Conferences now include career fairs, where students can meet with hiring companies. The society also developed a software platform that allows candidates to upload their résumé onsite. If they are a match for an open position, interviews can be held on the spot.

A variety of travel grants have been made available to students with limited resources so they can present their research papers at the society’s major conferences. More than 200 travel grants were awarded for the 2023 IEEE International Conference on Robotics and Automation, Tricomi says. “It’s very important for them to be there, presenting their work, gaining visibility, sharing their research, and also networking,” she says.

The new IDEA (inclusion, diversity, equity, and accessibility) travel grant for underrepresented groups was established by the society’s IEEE Women in Engineering committee and its chair, Karinne Ramirez Amaro. The grant can help students who are not presenters to attend conferences. It also helps increase diversity within the robotics field, Tricomi says.

The Member Support Program is a new initiative from the RAS member activities board’s vice president, Katja Mombaur, and past vice president Stefano Stramigioli. Financial support to attend the annual International Conference on Intelligent Robots and Systems is available to members and students who have contributed to the society’s mission-related activities.
The projects include organizing workshops, discussions, lectures, or networking events at conferences or sponsored events; serving on boards or committees; or writing papers that were accepted for publication by conferences or journals.

The society also gets budding engineers involved in publication activities through its Young Reviewers Program, which introduces them to best practices for peer review. Senior reviewers assign the students papers to check and oversee their work.

Personal and professional growth opportunities

Tricomi joined the society in 2021 shortly after starting her Ph.D. program at Heidelberg. Her research is in wearable assistive robotics for human augmentation or rehabilitation purposes. She holds a master’s degree in biomedical engineering from Politecnico di Torino, in Italy. She was new to the field of robotics, so her Ph.D. advisor, IEEE Senior Member Lorenzo Masia, encouraged her to volunteer for the society.

She is now transitioning to the role of SAC senior chair, and she says she is eager to collaborate with the new team to promote student and early career engagement within the robotics field. “I’ve realized I’ve grown up a lot in the two years since I started as chair,” she says. “At the beginning, I was much shier. I really want my colleagues to experience the same personal and professional growth as I have. You learn not only technical skills but also soft skills, which are very important in your career.”

  • Why Haven’t Hoverbikes Taken Off?
    by Willie D. Jones on 25 April 2024 at 15:10

    Ever since Return of the Jedi premiered in 1983, people have been imagining the day when they, like the film’s protagonist Luke Skywalker, would get to ride speeder bikes that zip across the landscape while hovering just a few meters above the ground. In the intervening years, there have been numerous claims made by companies that they’ve figured out how to make a real-world product that mimics movie magic. Their demos and PR campaigns have continued to whet the public’s appetite for hoverbikes, but there are some solid reasons why the nascent hoverbike industry has yet to get airborne.

“It’s gonna happen, but I don’t know if it’ll happen in your lifetime and mine,” says Ronald Barrett-Gonzalez, an aerospace-engineering professor at the University of Kansas. “With the current approaches, I think it’s just going to be a while.”

Barrett-Gonzalez was the advisor for a group of University of Kansas aerospace-engineering grad students who participated in the GoFly competition, sponsored by Boeing. The challenge—in which 3,800 teams from 100 countries participated—was to “design and build a safe, quiet, ultra-compact, near-VTOL personal flying device capable of flying 20 miles [32 kilometers] while carrying a single person.” The eventual grand prize winner was to have been awarded US $1 million, and $250,000 prizes were supposed to have gone to the quietest and the smallest compliant GoFly entries when the challenge concluded in September 2023. But the scheduled final fly-off between the teams whose personal aircraft were selected as the best-built entries in the second of the competition’s three phases was canceled because windy conditions made it unsafe for the machines to take to the skies.

Solving the Physics of Hoverbikes

“Helicopters, for a long time, have been built with relatively small engines for their size,” says Barrett-Gonzalez. “The way that such large vehicles can be lifted by small engines is that they have large rotor diameters. The volume of the column of air that they stand on is great. If you look at hoverbikes, the diameters of their rotors are much smaller. And physics says that you need more power per unit weight to lift an aircraft like that.”

To get the efficiency that comes along with a helicopter’s extensive rotor sweep, hoverbike designers will likely have to give up the thought of these machines touching down in parking spots meant for cars, or at least wait for new generations of engines and electric motors with greater power density to appear, along with batteries capable of delivering a lot more power and storing a lot more energy than those available today.
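Barrett-Gonzalez’s rotor-diameter argument can be made concrete with ideal momentum (actuator-disk) theory, in which the minimum hover power is P = T^1.5 / sqrt(2ρA) for thrust T, air density ρ, and total rotor disk area A. The sketch below is illustrative only: the 300-kilogram mass and the rotor sizes are assumptions chosen for comparison, not figures from the article or from any particular vehicle.

```python
# Ideal hover power from actuator-disk (momentum) theory: P = T**1.5 / sqrt(2 * rho * A).
# Illustrative only: the mass and rotor sizes below are assumptions, not figures from
# the article or from any specific aircraft.
import math

RHO = 1.225  # sea-level air density, kg/m^3
G = 9.81     # m/s^2

def ideal_hover_power(mass_kg: float, rotor_diameters_m: list[float]) -> float:
    """Minimum (ideal) hover power in watts for a craft of the given mass."""
    thrust = mass_kg * G                                                 # N, thrust equals weight in hover
    disk_area = sum(math.pi * (d / 2) ** 2 for d in rotor_diameters_m)   # total swept area, m^2
    return thrust ** 1.5 / math.sqrt(2 * RHO * disk_area)

mass = 300.0  # hypothetical machine plus rider, kg
helicopter_like = ideal_hover_power(mass, [7.0])        # one large 7-meter rotor
hoverbike_like = ideal_hover_power(mass, [0.9] * 4)     # four small 0.9-meter rotors

print(f"Large-rotor ideal hover power: {helicopter_like / 1000:.1f} kW")
print(f"Small-rotor ideal hover power: {hoverbike_like / 1000:.1f} kW")
# The small-rotor layout needs roughly four times the power before any real-world
# losses are counted, which is the rotor-diameter penalty described above.
```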
Assessing Hoverbikes’ Risks

Safety concerns are just as big a hurdle to making hoverbikes available for sale. The University of Kansas team’s GoFly entry, called Mamba, had been one of 10 Phase I winners for best design. The six-rotored hexcopter, which emphasized safety, certifiability, and performance, featured shrouded rotors and a tilting stabilizer surface.

The University of Kansas’ GoFly entry is a red-and-black hexcopter with two large and two small horizontal rotors, and two vertically placed rotors in the back. Photo: University of Kansas

But Mamba didn’t make it through Phase II, the build stage. Barrett-Gonzalez explains that “the kinds of safety criteria that we hold to are enforced by rules and regulations, such as the U.S. government’s FAR 23 airworthiness standards that govern small airplanes and FAR 27 standards for smaller helicopters.” That standard of safety, he says, is meant to ensure that the probability of a fatal event is no greater than one in 1 million flight hours. “For larger aircraft like the big Boeing commercial jets and larger helicopters, the standard is even more stringent. It’s one in 1 billion flight hours.”

That focus on safety doesn’t come without a cost, Barrett-Gonzalez adds. “The current thing that is keeping an aircraft like the Mamba from going from the drawing board to reality is that it’s costly. We could do what a Star Wars podracer can do, but that’s a $3.2 million machine. And then, only maybe Elon Musk and half a dozen other people could afford it.”

Several would-be hoverbike manufacturers have enticed potential buyers with price points more than an order of magnitude lower than Barrett-Gonzalez’s estimate. But Barrett-Gonzalez points out that they don’t include the combination of safety features built into the Mamba design. The Mamba has a roll cage, and the motors are cross-shafted. “So if you lose one motor you don’t come spiraling out of the sky,” he says. What’s more, the Mamba’s rotors are arranged according to a patented design that the team says makes it impossible for a rider or bystander to come in contact with the machine’s spinning blades.

For anyone who might argue that the Mamba project imploded because of overdesign, Barrett-Gonzalez recalls the Mamba team having extensive briefings with the director of the U.S. Federal Aviation Administration’s small airplanes directorate. “And he put it plainly: ‘The FAA will not certify a human eggbeater,’” says Barrett-Gonzalez.

Hover (a hoverbike hopeful formerly known as Hoversurf) recently moved its headquarters from California back to Russia, and Joby Aviation decided to start its electric vertical-takeoff-and-landing (eVTOL) air taxi business in Dubai. These moves don’t necessarily indicate a need to generate revenue before their designs can be refined to meet U.S. safety standards, but that explanation is as plausible as any. “Neither Russia nor Dubai has mature airborne safety standards that cover vehicles of this type,” says Barrett-Gonzalez.

Where Are They Now?

In 2014, IEEE Spectrum reported on Aerofex’s pledge to deliver a commercially available hoverbike, the Aero-X, by 2017. Spoiler alert: It didn’t happen. Though Aerofex is still in business, the company retired the Aero-X before the aircraft’s anticipated go-to-market date. The company proudly recalls the progress it made during Aero-X’s development, including kinesthetic control, which lets the pilot stabilize and control a personal aircraft by shifting their weight pretty much the same way one does when riding a bike. But 16 years after its 2008 maiden flight, the $85,000 Aero-X is still not available for sale.

Seven years after its initial go-to-market date, Aerofex’s Aero-X is still not available for sale. Photo: Aerofex

Meanwhile, Hover’s series of Scorpion hoverbike designs have gotten plenty of press attention. But as of this writing, the company’s flying motorcycles are still in the preorder stage, with no indication regarding when models like the $150,000 S-3 will be delivered to people who put down deposits.
And Tetra Aviation, the Tokyo startup whose Mk-5 single-seat eVTOL vehicle won the $100,000 Pratt & Whitney Disruptor Award from the GoFly judges, is also stuck in the development phase. Tetra said it planned to offer the Mk-5, with its 32 vertical lift rotors distributed across long, thin, aluminum-and-carbon-fiber wings and a single pusher prop at the rear, for $320,000, beginning in 2022. But the 8.5-meter-wide, 6.1-meter-long machine, which is supposed to travel 160 kilometers (at speeds up to 160 kilometers per hour) on a single charge, is still in the preorder stage.

According to the statements made by the companies seeking to market hoverbikes, the vehicles have been two or three years away for more than a decade. The market predictions made by these companies are starting to sound a lot like an old saw about nuclear fusion, which claims fusion has been “just 20 years away” for nearly 50 years.

  • Ukraine Is Riddled With Land Mines. Drones and AI Can Help
    by Jasper Baur on 25 April 2024 at 15:00

    Early on a June morning in 2023, my colleagues and I drove down a bumpy dirt road north of Kyiv in Ukraine. The Ukrainian Armed Forces were conducting training exercises nearby, and mortar shells arced through the sky. We arrived at a vast field for a technology demonstration set up by the United Nations. Across the 25-hectare field—that’s about the size of 62 American football fields—the U.N. workers had scattered 50 to 100 inert mines and other ordnance. Our task was to fly our drone over the area and use our machine learning software to detect as many as possible. And we had to turn in our results within 72 hours.

The scale was daunting: The area was 10 times as large as anything we’d attempted before with our drone demining startup, Safe Pro AI. My cofounder Gabriel Steinberg and I used flight-planning software to program a drone to cover the whole area with some overlap, taking photographs the whole time. It ended up taking the drone 5 hours to complete its task, and it came away with more than 15,000 images. Then we raced back to the hotel with the data it had collected and began an all-night coding session. We were happy to see that our custom machine learning model took only about 2 hours to crunch through all the visual data and identify potential mines and ordnance. But constructing a map for the full area that included the specific coordinates of all the detected mines in under 72 hours was simply not possible with any reasonable computational resources. The following day (which happened to coincide with the short-lived Wagner Group rebellion), we rewrote our algorithms so that our system mapped only the locations where suspected land mines were identified—a more scalable solution for our future work.

In the end we detected 74 mines and ordnance scattered across the surface of that enormous field, and the U.N. deemed our results impressive enough to invite us back for a second round of demonstrations. While we were in Ukraine, we also demonstrated our technology for the State Special Transportation Service, a branch of the Ukrainian military responsible for keeping roads and bridges open.

All our hard work paid off. Today, our technology is being used by several humanitarian nonprofits detecting land mines in Ukraine, including the Norwegian People’s Aid and the HALO Trust, which is the world’s largest nonprofit dedicated to clearing explosives left behind after wars. Those groups are working to make Ukraine’s roads, towns, and agricultural fields safe for the Ukrainian people. Our goal is to make our technology accessible to every humanitarian demining operation, making their jobs safer and more efficient. To that end, we’re deploying and scaling up—first across Ukraine, and soon around the world.

The Scale of the Land-Mine Problem

The remnants of war linger long after conflicts have died down. Today, an estimated 60 countries are still contaminated by mines and unexploded ordnance, according to the 2023 Landmine Monitor report. These dangers include land mines, improvised explosive devices, and shells and artillery that didn’t explode on landing—all together, they’re known as explosive ordnance (EO). More than 4,700 people were killed or wounded by EO in 2022, according to the Landmine Monitor report, and the vast majority of those casualties were civilians. Today, Ukraine is the most contaminated place in the world. About a third of its land—an area the size of Florida—is estimated to contain EO.
In humanitarian mine-clearing work, the typical process for releasing EO-contaminated land back to the community hasn’t changed much over the past 50 years. First a nontechnical survey is conducted where personnel go out to talk with local people about which areas are suspected of being contaminated. Next comes the technical survey, in which personnel use metal detectors, trained dogs, mechanical demining machines, and geophysical methods to identify all the hazards within a mined area. This process is slow, risky, and prone to false positives triggered by cans, screws, or other metal detritus. Once the crew has identified all the potential hazards within an area, a team of explosive-ordnance-disposal specialists either disarms or destroys the explosives.

Unexploded ordnance lies by the road in a Ukrainian town near the war’s front lines. Photo: John Moore/Getty Images

Most deminers would agree that it’s not ideal to identify the EO as they walk through the contaminated area; it would be much better to know the lay of the land before they take their first steps. That’s where drones can be literal lifesavers: They take that first look safely from up above, and they can quickly and cheaply cover a large area.

What’s more, the scale of the problem makes artificial intelligence a compelling part of the solution. Imagine if drone imagery was collected for all of Ukraine’s suspected contaminated land: an area of more than 170,000 square kilometers. It takes about 60,000 drone images to cover 1 square kilometer at a useful resolution, and we estimate that it takes at minimum 3 minutes for a human expert to analyze a drone image and check for EO. At that rate, it would take more than 500 million person-hours to manually search imagery covering all of Ukraine’s suspected contaminated land for EO. With AI, the task of analyzing this imagery and locating all visible EO in Ukraine will still be a massive endeavor, but it’s within reason.
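The person-hour figure above follows directly from the numbers the author gives. A quick back-of-the-envelope check (the 2,000-hour work year is an assumption added only for scale):

```python
# Back-of-the-envelope check of the manual-review estimate quoted above.
suspected_area_km2 = 170_000   # suspected contaminated land in Ukraine (article figure)
images_per_km2 = 60_000        # drone images needed per square kilometer (article figure)
minutes_per_image = 3          # expert review time per image (article figure)

total_images = suspected_area_km2 * images_per_km2    # about 1.0e10 images
person_hours = total_images * minutes_per_image / 60  # about 5.1e8 hours

print(f"Images to review: {total_images:.2e}")
print(f"Person-hours:     {person_hours:.2e}")          # more than 500 million, as stated
print(f"Person-years:     {person_hours / 2000:,.0f}")  # roughly 255,000 work-years at 2,000 h/yr
```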
Humanitarian demining groups are slow to adopt new technologies because any mistake, including ones caused by unfamiliarity with new tech, can be fatal. But in the last couple of years, drones seem to have reached an inflection point. Many government agencies and nonprofit groups that work on land-mine detection and removal are beginning to integrate drones into their standard procedures. Besides collecting aerial imagery of large areas with suspected hazards, which helps with route planning, the drones are helping to prioritize areas for clearance and, in some cases, detecting land mines themselves.

After several years of research on this topic during my undergraduate education, in 2020 I cofounded the company now known as Safe Pro AI to push the technology forward and make deployment a reality. My cofounder and I didn’t know at the time that Russia’s full-scale invasion of Ukraine in February 2022 would soon make this work even more vital.

How We Got Started With Drones for Demining

I became interested in land-mine detection while studying geological science as an undergraduate at Binghamton University, New York. Through my work in the Geophysics and Remote Sensing Laboratory run by Timothy de Smet and Alex Nikulin, I got involved in a project to detect the PFM-1, a Russian-made antipersonnel land mine also known as the butterfly mine due to its unique shape and because it’s typically scattered by aircraft or artillery shells.

Afghanistan is still contaminated with many of these mines, left behind more than 40 years ago after the Soviet-Afghan War. They’re particularly problematic because they’re mostly made of plastic, with only a few small metal components; to find them with a metal detector requires turning up the equipment’s sensitivity, which leads to more false positives.

In 2019, we trained a machine learning model by scattering inert PFM-1 land mines and collecting visual imagery via drone flights in various environments, including roads, urban areas, grassy fields, and places with taller vegetation. Our resulting model correctly detected 92 percent of PFM-1s in these environments, on average. While we were pleased with its performance, the model could identify only that one type of land mine, and only if the mines were above ground. Still, this work provided the proof of concept that paved the way for what we’re doing today.

In 2020, Steinberg and I founded the Demining Research Community, a nonprofit whose goal is to advance the field of humanitarian mine removal through research in remote sensing, geophysics, and robotics. Over the next few years, we continued to develop our software and make contacts in the field. At the 2021 Mine Action Innovation Conference in Geneva, we heard about a researcher named John Frucci at Oklahoma State University who directs the OSU Global Consortium for Explosive Hazard Mitigation. In the summer of 2022, we spent two weeks with Frucci at OSU’s explosives range, which has more than 50 types of unexploded ordnance. We used our drones to collect visual training data for many different types of explosives: small antipersonnel mines, larger antitank mines, improvised explosive devices, grenades, and many other dangerous explosive things you never want to encounter.

Our Software Solution for Demining by Drone

To develop our technology for real-world use, Steinberg and I cofounded Safe Pro AI and joined Safe Pro Group, a company that provides drone services and sells protective gear for demining crews. Going into this work, we were aware of many academic proposals for new methods of EO detection that haven’t gotten out of the lab. We wanted to break that paradigm, so we spent a lot of time talking with demining personnel about their needs. Safe Pro Group’s director of operations in Ukraine, Fred Polk, spent more than 200 days last year talking to deminers in Ukraine about the problems they face and the solutions they’d like to see.

In light of those conversations, we developed a user-friendly Web application called SpotlightAI. Any authorized person can log on to the website and upload their imagery from a commercial off-the-shelf drone; our system will then run the visual data through our AI model and return a map with all the coordinates of the detected explosive ordnance. We don’t anticipate that the technology will replace human labor—personnel will still have to go through fields with metal detectors to be sure the drones haven’t missed anything. But the drones can speed up the process of the initial nontechnical survey and can also help demining operators figure out which areas to prioritize. The drone-based maps can also give personnel more situational awareness going into an inherently dangerous situation.

The first big test of our technology was in 2022 in Budapest at a Hungarian Explosive Ordnance Disposal test range.
At that time, I was at Mount Okmok, a volcano in Alaska’s Aleutian Islands, doing field work on volcanology for my Ph.D., so Steinberg represented Safe Pro AI at that event. He told me via satellite phone that our model detected 20 of the 23 pieces of ordnance, returning the results in under an hour.

After Budapest we made two trips to Ukraine, first to field-test our technology in a real-world minefield environment and then for the 2023 U.N. demonstration previously described. In another trip this past March, we visited minefields in eastern Ukraine that are currently being demined by nonprofit organizations using our SpotlightAI system. We were accompanied by Artem Motorniuk, a Ukrainian software developer who joined Safe Pro Group in 2023. It was incredibly saddening to see the destruction of communities firsthand: Even after the front line has moved, explosive remnants of war still hinder reconstruction. Many people flee, but the ones who stay are faced with difficult decisions. They must balance essential activities such as farming and rebuilding with the risks posed by pursuing those activities in areas that might have land mines and explosive ordnance. Seeing the demining operations firsthand reinforced the impact of the work, and listening to the demining operators’ feedback in the field helped us further refine the technology.

4 Ways to Sense Danger

We’ve continued to improve the performance of our model, and it has finally reached a point where it’s almost as good as an expert human in detecting EO on the surface from visual imagery, while performing this task many times faster than any human could. Sometimes it even finds items that are heavily obscured by vegetation. To give it superhuman capabilities to peer under the dirt, we need to bring in other detection modalities. For example, while we originally rejected thermal imaging as a stand-alone detection method, we’re now experimenting with using it in conjunction with visual imaging. The visual-imagery-based machine learning model returns the detection results, but we then add a thermal overlay that can reveal other information—for example, it might show a ground disturbance that suggests a buried object.

The biggest challenge we’re grappling with now is how to detect EO through thick and high vegetation. One strategy I developed is to use the drone imagery to create a 3D map, which is used to estimate the vegetation height and coverage. An algorithm then converts those estimates into a heat map showing how likely it is that the machine learning model can detect EO in each area: For example, it might show a 95 percent detection rate in a flat area with low grass, and only a 5 percent detection rate in a region with trees and bushes. While this approach doesn’t solve the problem posed by vegetation, it gives deminers more context for our results. We’re also incorporating more vegetation imagery into our training data itself to improve the model’s detection rate in such situations.
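A minimal sketch of that vegetation-to-detection-likelihood mapping, assuming a simple thresholding scheme: the height cutoffs and the mid-range rate below are invented placeholders, not Safe Pro AI’s values; only the 95 percent (low grass) and 5 percent (trees and bushes) endpoints echo the figures given above.

```python
# Sketch of converting per-cell vegetation-height estimates into a detection-likelihood map,
# in the spirit of the approach described above. Thresholds and the intermediate rate are
# placeholders; only the 0.95 and 0.05 endpoints echo the article's example.
import numpy as np

def detection_likelihood(veg_height_m: np.ndarray) -> np.ndarray:
    """Map a grid of estimated vegetation heights (meters) to detection rates (0-1)."""
    likelihood = np.empty_like(veg_height_m, dtype=float)
    likelihood[veg_height_m < 0.15] = 0.95                               # short grass: near-expert detection
    likelihood[(veg_height_m >= 0.15) & (veg_height_m < 1.0)] = 0.50     # tall grass or crops (assumed rate)
    likelihood[veg_height_m >= 1.0] = 0.05                               # bushes and trees: mostly obscured
    return likelihood

# Toy 3 x 4 grid of vegetation heights estimated from a drone-derived 3D map (meters).
heights = np.array([[0.05, 0.10, 0.40, 2.5],
                    [0.08, 0.60, 1.80, 3.0],
                    [0.05, 0.05, 0.30, 0.9]])
print(detection_likelihood(heights))
```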
To offer these services in a scalable way, Safe Pro AI has partnered with Amazon Web Services, which is providing computational resources to deal with large amounts of visual imagery uploaded to SpotlightAI. Drone-based land-mine detection in Ukraine is a problem of scale. An average drone pilot can collect more than 30 hectares (75 acres) of imagery per day, roughly equal to 20,000 images. Each one of these images covers an area of 10 by 20 meters, within which the system must detect a land mine the size of your hand and the color of grass. AWS allows us to utilize extremely powerful computers on demand to process thousands of images a day through our machine learning model to meet the needs of deminers in Ukraine.

What’s Next for Our Humanitarian Demining Work

One obvious way we could improve our technology is by enabling it to detect buried EO, either by visually detecting disturbed earth or using geophysical sensors. In the summer of 2023, our nonprofit experimented with putting ground-penetrating radar, aerial magnetometry, lidar, and thermal sensors on our drones in an attempt to locate buried items. We found that lidar is useful for detecting trenches that are indicative of ground disturbance, but it can’t detect the buried objects themselves. Thermal imagery can be useful if a buried metal item has a very different thermal signature than the surrounding soil, but we typically see a strong differential only in certain environments and at certain times of day.

Magnetometers are the best tools for detecting buried metal targets—they’re the most similar to the handheld metal detectors that deminers use. But the magnetic signal weakens rapidly as the drone gets farther from the ground; for a small, dipole-like target, it falls off roughly with the cube of the distance. So if a drone flies too high, it won’t see the magnetic signatures and won’t detect the objects; but if it flies too low, it may have to navigate through bushes or other terrain obstacles. We’re continuing to experiment with these modalities to develop an intelligent sensor-fusion method to detect as many targets as possible.

Right now, SpotlightAI can detect and identify more than 150 types of EO, and it’s also pretty good at generalization—if it encounters a type of land mine it never saw in its training data, it’s likely to identify it as something worthy of attention. It’s familiar with almost all American and Russian munitions, as well as some Israeli and Italian types, and we can make the model more robust by training it on ordnance from elsewhere. As our company grows, we may want to fine-tune our algorithms to offer more customized solutions for different parts of the world. Our current model is optimized for Ukraine and the types of EO found there, but many other countries are still dealing with contamination. Maybe we’ll eventually have separate models for places such as Angola, Iraq, and Laos.

Our hope is that in the next few years, our technology will become part of the standard procedure for demining teams—we want every team to have a drone that maps out surface contamination before anyone sets foot into a minefield. We hope we can make the world safer for these teams, and significantly speed up the pace of releasing land back to the communities living with remnants of war. The best possible outcome will be if someday our services are no longer needed, because explosive devices are no longer scattered across fields and roads. In the meantime, we’ll work every day to put ourselves out of business.

This article appears in the May 2024 print issue.

  • Why One Man Spent 12 Years Fighting Robocalls
    by Michael Koziol on 24 April 2024 at 16:00

    At some point, our phone habits changed. It used to be that if the phone rang, you answered it. With the advent of caller ID, you’d only pick up if it was someone you recognized. And now, with spoofing and robocalls, it can seem like a gamble to pick up the phone, period. The robocall-blocking service YouMail estimates there were more than 55 billion robocalls in the United States in 2023. How did robocalls proliferate so much that now they seem to be dominating phone networks? And can any of this be undone? IEEE Spectrum spoke with David Frankel of ZipDX, who’s been fighting robocalls for over a decade, to find out.

David Frankel is the founder of ZipDX, a company that provides audioconferencing solutions. He also created the Rraptor automated robocall surveillance system.

How did you get involved in trying to stop robocalls?

David Frankel: Twelve years ago, I was working in telecommunications and a friend of mine called me about a contest that the Federal Trade Commission (FTC) was starting. They were seeking the public’s help to find solutions to the robocall problem. I spent time and energy putting together a contest entry. I didn’t win, but I became so engrossed in the problem, and like a dog with a bone, I just haven’t let go of it.

How can we successfully combat robocalls?

Frankel: Well, I don’t know the answer, because I don’t feel like we’ve succeeded yet. I’ve been very involved in something called traceback—in fact, it was my FTC contest entry. It’s a semiautomated process where, in fact, with the cooperation of individual phone companies, you go from telco A to B to C to D, until you ultimately get somebody that sent that call. And then you can find the customer who paid them to put this call on the network.

I’ve got a second tool—a robocall surveillance network. We’ve got tens of thousands of telephone numbers that just wait for robocalls. We can correlate that with other data and reveal where these calls are coming from. Ideally, we stop them at the source. It’s a sort of sewage that’s being pumped into the telephone network. We want to go upstream to find the source of the sewage and deal with it there.

Can more regulation help?

Frankel: Well, regulations are really, really tough for a couple of reasons. One is, it’s a bureaucratic, slow-moving process. It’s also a cat-and-mouse game, because, as quick as you start talking about new regulations, people start talking about how to circumvent them. There’s also this notion of regulatory capture. At the Federal Communications Commission, the loudest voices come from the telecommunications operators. There’s an imbalance in the control that the consumer ultimately has over who gets to invade their telephone versus these other interests.

Is the robocall situation getting better or worse?

Frankel: It’s been fairly steady state. I’m just disappointed that it’s not substantially reduced from where it’s been. We made progress on explicit fraud calls, but we still have too many of these lead-generation calls. We need to get this whacked down by 80 percent. I always think that we’re on the cusp of doing that, that this year is going to be the year. There are people attacking this from a number of different angles. Everybody says there’s no silver bullet, and I believe that, but I hope that we’re about to crest the hill.

Is this a fight that’s ultimately winnable?

Frankel: I think we’ll be able to take back our phone network. I’d love to retire, having something to show for our efforts. I don’t think we’ll get it to zero.
But I think that we’ll be able to push the genie a long way back into the bottle. The measure of success is that we all won’t be scared to answer our phone. It’ll be a surprise that it’s a robocall—instead of the expectation that it’s a robocall. This article appears in the May 2024 issue as “5 Questions for David Frankel.”
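As a purely illustrative aside, the traceback process Frankel describes amounts to walking a call’s path upstream, carrier by carrier, until the originating provider (and its paying customer) can be identified. The sketch below uses invented carrier names and a single lookup table standing in for what is, in practice, a cooperative, semiautomated exchange among phone companies.

```python
# Illustrative-only sketch of the "traceback" idea described above: starting from the
# carrier that delivered a robocall, ask each provider which upstream provider handed
# it the call, until you reach the originator. The data below is invented; real
# tracebacks rely on the cooperation of individual phone companies, not a lookup table.
upstream_of = {
    "Telco D": "Telco C",   # D delivered the call to the consumer
    "Telco C": "Telco B",
    "Telco B": "Telco A",
    "Telco A": None,        # A accepted the call from the paying customer
}

def trace_back(terminating_carrier: str) -> list[str]:
    """Follow the call path upstream and return it, ending at the originating carrier."""
    path, current = [], terminating_carrier
    while current is not None:
        path.append(current)
        current = upstream_of.get(current)
    return path

print(" -> ".join(trace_back("Telco D")))   # Telco D -> Telco C -> Telco B -> Telco A
```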

  • Get to Know the IEEE Board of Directors
    by IEEE on 23 April 2024 at 15:42

    The IEEE Board of Directors shapes the future direction of IEEE and is committed to ensuring IEEE remains a strong and vibrant organization—serving the needs of its members and the engineering and technology community worldwide—while fulfilling the IEEE mission of advancing technology for the benefit of humanity. This article features IEEE Board of Directors members Sergio Benedetto, Jenifer Castillo, and Fred Schindler.

IEEE Life Fellow Sergio Benedetto
Vice President, Publication Services and Products
Photo: Sergio Bregni

Benedetto is a professor emeritus at Politecnico di Torino, in Turin, Italy. His research in digital communications has contributed to the theory of error-correcting codes, which yield performance close to the Shannon theory limits and, according to Benedetto, explain “the astonishing performance” of turbo codes. Benedetto has collaborated with the European Space Agency and the NASA Jet Propulsion Laboratory to design codes that are now standard for satellite communications.

As an active member of the IEEE Communications Society and in positions he has held related to IEEE publications for more than 20 years, Benedetto has seen IEEE’s most invaluable asset at work: great scientists in IEEE’s fields of interest willing to serve their community as volunteers.

Benedetto has been active in digital communications for more than 40 years. He has coauthored five books and more than 250 papers. His publications have received more than 20,000 citations. He is an IEEE Life Fellow and a member of the Academy of Science of Turin. He has received numerous international awards throughout his career, including the 2008 IEEE Communications Society Edwin Howard Armstrong Award.

IEEE Senior Member Jenifer Castillo
Director, Region 9: Latin America
Photo: Lufthansa Technik

Castillo is a sales and key account manager in Puerto Rico for a leading maintenance, repair, and overhaul services provider in the aviation industry. In her role, she heads projects, including negotiating and executing contracts, and considers customers’ needs while prioritizing aviation industry safety. Or, as Castillo likes to say, she “plays with planes on the beautiful island of Puerto Rico.”

A member of the IEEE Aerospace and Electronic Systems Society and IEEE Industrial Electronics Society, Castillo has been an active IEEE volunteer for many years. She was the first Latina to chair the IEEE Women in Engineering committee, bringing a different perspective to the organization. During her 2021-2022 term, she introduced several benefits, including two awards and an international scholarship, while nurturing a global volunteer network supporting women’s advancement in science, technology, engineering, and mathematics fields.

Castillo helped found IEEE MOVE Puerto Rico, a portable version of the IEEE-USA MOVE program that provides communities affected by natural disasters with power and Internet access. The disaster response during Hurricane Maria, supported by the IEEE Foundation, was the turning point that led the local sections to promote this initiative, which enabled volunteers to support the Red Cross’ response and recovery efforts. Castillo has been a member of the IEEE Industry Engagement Committee and chair of the IEEE Puerto Rico and Caribbean Section.
In 2020, she was recognized with the IEEE Member and Geographic Activities Achievement Award for “sustained and outstanding achievements in promoting students, IEEE Young Professionals, and IEEE WIE membership development in Latin America and the Caribbean.” She was also honored with the IEEE Region 9 Oscar C. Fernández Outstanding Volunteer Award in 2020.

IEEE Life Fellow Manfred “Fred” Schindler
Vice President, Technical Activities
Photo: Lyle Photos

Schindler has spent his career in industry working on RF, microwave, and millimeter-wave semiconductors. He has led the development of advanced RF semiconductor products for commercial and defense applications. “Taking a technology from the lab and seeing it through to high-volume production is rewarding,” he says, “especially knowing that virtually everyone carries a device using the technologies we developed.”

A member of the IEEE Microwave Theory and Technology Society (IEEE MTT-S), Schindler served as its president in 2003. He also has served as chair of both the IEEE Conferences Committee and the IEEE International Microwave Conference. As vice president of Technical Activities, he is working to overcome structural barriers among established communities to ensure IEEE’s future stability and success.

Schindler holds 11 patents and has published more than 40 technical articles. He founded the IEEE Radio and Wireless Symposium, and has contributed a column on microwave business to IEEE Microwave Magazine since 2011. He received the 2018 IEEE MTT-S Distinguished Service Award for his efforts benefiting the society and the microwave profession.

  • Robert Kahn: The Great Interconnector
    by Tekla S. Perry on 20 April 2024 at 15:00

    In the mid-1960s, Robert Kahn began thinking about how computers with different operating systems could talk to each other across a network. He didn’t think much about what they would say to one another, though. He was a theoretical guy, on leave from the faculty of the Massachusetts Institute of Technology for a stint at the nearby research-and-development company Bolt, Beranek and Newman (BBN). He simply found the problem interesting. “The advice I was given was that it would be a bad thing to work on. They would say it wasn’t going to lead to anything,” Kahn recalls. “But I was a little headstrong at the time, and I just wanted to work on it.”

Robert E. Kahn
Current job: Chairman, CEO, and president of the Corporation for National Research Initiatives (CNRI)
Date of birth: 23 December 1938
Birthplace: Brooklyn, New York
Family: Patrice Ann Lyons, his wife
Education: BEE 1960, City College of New York; M.A. 1962 and Ph.D. 1964, Princeton University
First job: Runner for a Wall Street brokerage firm
First electronics job: Bell Telephone Laboratories, New York City
Biggest surprise in career: Leaving—and then staying out of—academics
Patents: Several, including two related to the digital-object architecture and two on remote pointing devices
Heroes: His parents, his wife, Egon Brenner, Irwin Jacobs, Jack Wozencraft
Favorite books: March of Folly: From Troy to Vietnam (1984) by Barbara W. Tuchman, The Two-Ocean War: A Short History of the United States Navy in the Second World War (1963) by Samuel Eliot Morison
Favorite movies: The Day the Earth Stood Still (1951), Casablanca (1942)
Favorite kind of music: Opera, operatic musicals
Favorite TV shows: Golf, tennis, football, soccer—basically any sports show
Favorite food: Chinese that he cooks himself, as taught to him by Franklin Kuo, codeveloper of ALOHAnet at the University of Hawaii
Favorite restaurants: Le Bernardin, New York City, and L’Auberge Chez Francois, Great Falls, Va.
Leisure activities past and present: Skiing, whitewater canoeing, tennis, golf, cooking
Key organizational memberships: IEEE, Association for Computing Machinery (ACM), the U.S. National Academies of Science and Engineering, the Marconi Society
Major awards: IEEE Medal of Honor “for pioneering technical and leadership contributions in packet communication technologies and foundations of the Internet,” the Presidential Medal of Freedom, the National Medal of Technology and Innovation, the Queen Elizabeth Prize for Engineering, the Japan Prize, the Prince of Asturias Award

Kahn ended up “working on it” for the next half century. And he is still involved in networking research today. It is for this work on packet communication technologies—as part of the project that became the ARPANET and in the foundations of the Internet—that Kahn is being awarded the 2024 IEEE Medal of Honor.

The ARPANET Is Born

Kahn wasn’t the only one thinking about connecting disparate computers in the 1960s. In 1965, Larry Roberts, then at the MIT Lincoln Laboratory, connected one computer in Massachusetts to another in California over a telephone line. Bob Taylor, then at the Advanced Research Projects Agency (ARPA), got interested in connecting computers, in part to save the organization money by getting the expensive computers it funded at universities and research organizations to share their resources over a packet-switched network. This method of communications involves cutting up data files into blocks and reassembling them at their destination.
It allows each fragment to take a variety of paths across a network and helps mitigate any loss of data, because individual packets can easily be resent. Taylor’s project—the ARPANET—would be far more than theoretical. It would ultimately produce the world’s first operational packet network linking distributed interactive computers.

Meanwhile, over at BBN, Kahn intended to spend a couple of years in industry so he could return to academia with some real-world experience and ideas for future research. “I wasn’t hired to do anything in particular,” Kahn says. “They were just accumulating people who they thought could contribute. But I had come from the conceptual side of the world. The people at BBN viewed me as other.”

Kahn didn’t know much about computers at the time—his Ph.D. thesis involved signal processing. But he did know something about communication networks. After earning a bachelor’s degree in electrical engineering from City College of New York in 1960, Kahn had joined Bell Telephone Laboratories, working at its headquarters in Manhattan, where he helped to analyze the overall architecture and performance of the Bell telephone system. That involved conceptualizing what the network needed to do, developing overall plans, and handling the mathematical calculations related to the architecture as implemented, Kahn recalls. “We would figure out things like: Do we need more lines between Denver and Chicago?” he says.

Kahn stayed at Bell Labs for about nine months; to his surprise, a graduate fellowship came through that he decided to accept. He was off to Princeton University in the autumn of 1961, returning to Bell Labs for the next few summers. So, when Kahn was at BBN a few years later, he knew enough to realize that you wouldn’t want to use the telephone network as the basis of a computer network: Dial-up connections took 10 or 20 seconds to go through, the bandwidth was low, the error rate was high, and you could connect to only one machine at a time.

Other than generally thinking that it would be nice if computers could talk to one another, Kahn didn’t give much thought to applications. “If you were engineering the Bell System,” he says, “you weren’t trying to figure out who in San Francisco is going to say what to whom in New York. You were just trying to figure out how to enable conversations.”

Bob Kahn graduated from high school in 1955. Photo: Bob Kahn

Kahn wrote a series of reports laying out how he thought a network of computers could be implemented. They landed on the desk of Jerry Elkind, a BBN vice president who later joined Xerox PARC. And Elkind told Kahn about ARPA’s interest in computer networking. “I didn’t really know what ARPA was, other than I had seen the name,” Kahn says. Elkind told him to send his reports to Larry Roberts, the recently hired program manager for ARPA’s networking project. “The next thing I know,” Kahn says, “there’s an RFQ [request for quotation] from ARPA for building a four-node net.”

Kahn, still the consummate academic, hadn’t thought he’d have to do much beyond putting his thoughts down on paper. “It never dawned on me that I’d actually get involved in building it,” he says. Kahn handled the technical portion of BBN’s proposal, and ARPA awarded BBN the four-node-network contract in January of 1969. The nodes rolled out later that year: at UCLA in September; the Stanford Research Institute (SRI) in October; the University of California, Santa Barbara, in November; and the University of Utah in December.
Kahn postponed his planned return to MIT and continued to work on expanding this network. In October 1972, the ARPANET was publicly unveiled at the first meeting of the International Conference on Computer Communications, in Washington, D.C.

“I was pretty sure it would work,” Kahn says, “but it was a big event. There were 30 or 40 nodes on the ARPANET at the time. We put 40 different kinds of terminals in the [Washington Hilton] ballroom, and people could walk around and try this terminal, that terminal, which might connect to MIT, and so forth. You could use Doug Engelbart’s NLS [oN-Line System] at SRI and manipulate a document, or you could go onto a BBN computer that demonstrated air-traffic control, showing an airplane leaving one airport, which happened to be on a computer in one place, and landing at another airport, which happened to be on a computer in another place.”

The demos, he recalls, ran 24 hours a day for nearly a week. The reaction, he says, “was ‘Oh my God, this is amazing’ for everybody, even people who worried about how it would affect their businesses.”

Goodbye BBN, Hello DARPA

Kahn officially left BBN the day after the demo concluded to join DARPA (the agency having recently added the word “Defense” to its name). He felt he’d done what he could on networking and was ready for a new challenge. “They hired me to run a hundred-million-dollar program on automated manufacturing. It was an opportunity of a lifetime, to get on the factory floor, to figure out how to distribute processing, distribute artificial intelligence, use distributed sensors.”

Bob Kahn served on the MIT faculty from 1964 to 1966. Photo: Bob Kahn

Soon after he arrived at DARPA, Congress pulled the plug on funding for the proposed automated-manufacturing effort. Kahn shrugged his shoulders and figured he’d go back to MIT. But Roberts asked Kahn to stay. Kahn did, but rather than work on ARPANET he focused on developing packet radio, packet satellite, and even, he says, packetizing voice, a technology that led to VoIP (Voice over Internet Protocol) today.

Getting those new networks up and running wasn’t always easy. Irwin Jacobs, who had just cofounded Linkabit and later cofounded Qualcomm, worked on the project. He recalls traveling through Europe with Kahn, trying to convince organizations to become part of the network. “We visited three PTTs [postal, telegraph, and telephone services],” Jacobs says, “in Germany, in France, and in the U.K. The reactions were all the same. They were very friendly, they gave us the morning to explain packet switching and what we were thinking of doing, then they would serve us lunch and throw us out.”

But the two of them kept at it. “We took a little hike one day,” Jacobs says. “There was a steep trail that went up the side of a fjord, water coming down the opposite side. We came across an old man, casting a line into the stream rushing downhill. He said he was fishing for salmon, and we laughed—what were his chances? But as we walked uphill, he yanked on his rod and pulled out a salmon.” The Americans were impressed with his determination. “You have to have confidence in what you are trying to do,” Jacobs says. “Bob had that. He was able to take rejection and keep persisting.”

Ultimately, a government laboratory in Norway (the Norwegian Defence Research Establishment) and a laboratory at University College London came on board—enough to get the satellite network up and running.
And Then Came the Internet

With the ARPANET, packet-radio, and packet-satellite networks all operational, it was clear to Kahn that the next step would be to connect them. He knew that the ARPANET design all by itself wouldn’t be useful for bringing together these disparate networks.

“Number one,” he says, “the original ARPANET protocols required perfect delivery, and if something didn’t get through and you didn’t get acknowledgment, you kept trying until it got through. That’s not going to work if you’re in a noisy environment, if you’re in a tunnel, if you’re behind a mountain, or if somebody’s jamming you. So I wanted something that didn’t require perfect communication.

“Number two,” he continues, “you wanted something that didn’t have to wait for everything in a message to get through before the next message could get through.

“And you had no way in the ARPANET protocols for telling a destination what to do with the information when it got there. If a router got a packet and it wasn’t for another node on the ARPANET, it would assume ‘Oh, must be for me.’ It had nowhere else to send it.”

He approached Vint Cerf, then an assistant professor at Stanford University, who had been involved with Kahn in testing the ARPANET during its development, and asked him to collaborate. “Vint, as a computer scientist, thought of things in terms of bits and computer programs. As an electrical engineer, I thought about signals and bandwidth and the nondigital side of the world. We brought together different sets of talents,” Kahn says.

“Bob came out to Stanford to see me in the spring of 1973 and raised the problem of multiple networks,” Cerf recalls. “He thought they should have a set of rules that allowed them to be autonomous but interact with each other. He called it internetworking.”

“He’d already given this serious thought,” Cerf continues. “He wanted SRI to host the operations of the packet-radio network, and he had people in the Norwegian defense-research establishment working on the packet-satellite network. He asked me how we could make it so that a host on any network could communicate with another in a standardized way.”

Cerf was in. The two met regularly over the next six months to work on “the internetworking problem.” Between them, they made some half a dozen cross-country trips and also met one-on-one whenever they found themselves attending the same conference. In July 1973, they decided it was time to commit their ideas to paper. “I remember renting a conference room at the Cabana Hyatt in Palo Alto,” Kahn says. The two planned to sequester themselves there in August and write until they were done. Kahn says it took a day; Cerf remembers it as two, or at least a day and a half. In any case, they got it done in short order.

Cerf took the first crack at it. “I sat down with my yellow pad of paper,” he says. “And I couldn’t figure out where to start.”

“I went out to pay for the conference room,” Kahn says. “When I came back Vint was sitting there with the pencil in his hand—and not a single word on the paper.” Kahn admits that the task wasn’t easy. “If you tried to describe the United States government,” he says, “what would you say first?
It’s the buildings, it’s the people, it’s the Constitution. Do you talk about Britain? Do you talk about Indians? Where do you start?”

In 1997, President Bill Clinton [right] presented the National Medal of Technology to Bob Kahn [center] and Vint Cerf [left]. Photo: Bob Kahn

Kahn took the pencil from Cerf and started writing. “That’s his style,” Cerf says, “write as much as you can and edit later. I tend to be more organized, to start with an outline.”

“I told him to go away,” Kahn says, “and I wrote the first eight or nine pages. When Vint came back, he looked at what I had done and said, ‘Okay, give me the pencil.’ And he wrote the next 20 or 30 pages. And then we went back and forth.”

Finally, Cerf walked off with the handwritten version to give to his secretary to type. When she finished, he told her to throw that original draft away. “Historians have been mad at me ever since,” Cerf says.

“It might be worth a fortune today,” Kahn muses.

The resulting paper, published in the IEEE Transactions on Communications in 1974, represented the basis of the Internet as we now know it. It introduced the Transmission Control Protocol, later separated into two parts and now known as TCP/IP.

A New World on an Index Card

A key to making this network of networks work was the Internet Protocol (IP) addressing system. Every new host coming onto the network required a new IP address. These numerical labels uniquely identify computers and are used for routing packets to their locations on the network. Initially, Kahn assigned the network part of the IP addresses himself, keeping a record of who had been allotted what set of numbers on a single index card he carried in his shirt pocket. When that card began to fill up in the late ’70s, he decided it was time to turn over the task to others. It became the responsibility of Jon Postel, and subsequently that of the Internet Assigned Numbers Authority (IANA) at the University of Southern California. IANA today is part of ICANN, the Internet Corporation for Assigned Names and Numbers.

Bob Kahn and Vint Cerf visited Yellowstone National Park together in the early 2000s. Photo: Bob Kahn

Kahn moved up the DARPA ladder, to chief scientist, deputy director, and, in 1979, director of the Information Processing Techniques Office. He stayed in that last role until late 1985. At DARPA, in addition to his networking efforts, he launched the VLSI [very-large-scale integration] Architecture and Design Project and the billion-dollar Strategic Computing Initiative.

In 1985, with political winds shifting and government research budgets about to shrink substantially, Kahn left DARPA to form a nonprofit dedicated to fostering research on new infrastructures, including designing and prototyping networks for computing and communications. He established it as the Corporation for National Research Initiatives (CNRI). Kahn reached out to industry for funding, making it clear that, as a nonprofit, CNRI intended to make its research results open to all. Bell Atlantic, Bellcore, Digital Equipment Corp., IBM, MCI, NYNEX, Xerox, and others stepped up with commitments that totaled over a million dollars a year for several years. He also reached out to the U.S. National Science Foundation and received funding to build testbeds to demonstrate technology and applications for computer networks at speeds of at least a gigabit.
government funding to create a secretariat for the Internet Activities Board, which eventually led to the establishment of the Internet Engineering Task Force, which has helped evolve Internet protocols and standards. CNRI ran the secretariat for about 18 years. Cerf joined Kahn at CNRI about six months after it started. “We were thinking about applications of the Internet,” Cerf says. “We were interested in digital libraries, as were others.” Kahn and Cerf sought support for such work, and DARPA again came through, funding CNRI to undertake a research effort involving building and linking digital libraries at universities. They also began working on the concept of “Knowbots,” mobile software programs that could collect and store information to be used to handle distributed tasks on a network. As part of that digital library project, Kahn collaborated with Robert Wilensky at the University of California, Berkeley, on a paper called “A Framework for Distributed Digital Object Services,” published in the International Journal on Digital Libraries in 2006. The Digital Object Emerges Out of this work came the idea that today forms the basis of much of Kahn’s current efforts: digital objects, also known as digital entities. A digital object is a sequence of bits, or a set of such sequences, having a unique identifier. A digital object may incorporate a wide variety of information—documents, movies, software programs, wills, and even cryptocurrency. The concept of a digital object, together with distributed repositories, metadata registries, and a decentralized identifier resolution system, form the digital-object architecture. From its identifier, a digital object can be located even if it moves to a different place on the net. Kahn’s collaborator on much of this work is his wife, Patrice Lyons, a copyright and communications lawyer. Initially, CNRI maintained the registry of Digital Object Identifier (DOI) records. Then those came to be kept locally, and CNRI maintained just the registry of prefix records. In 2014, CNRI handed off that responsibility to a newly formed international body, the DONA Foundation in Geneva. Kahn serves as chair of the DONA board. The organization uses multiple distributed administrators to operate prefix registries. One, the International DOI Foundation, has handled close to 100 billion identifiers to date. The DOI system is used by a host of publishers, including IEEE, as well as other organizations to manage their digital assets. A plaque commemorating the ARPANET now stands in front of the Arlington, Va., headquarters of the Defense Advanced Research Projects Agency (DARPA). Bob Kahn Kahn sees this current effort as a logical extension of the work he did on the ARPANET and then the Internet. “It’s all about how we use the Internet to manage information,” he says. Kahn, now 85, works more than five days a week and has no intention of slowing down. The Internet, he says, is still in its startup phase. Why would he step back now? “I once had dinner with [historian and author] David McCullough,” Kahn explains. Referring to the 1974 paper he wrote with Cerf, he says, “I told him that if I were sitting in the audience at a meeting, people wouldn’t say ‘Here’s what the writers of this paper really meant,’ because I would get up and say, ‘Well we wrote that and….’ “ “I asked McCullough, ‘When do you consider the end of the beginning of America?’” After some discussion, McCullough put the date at 4 July 1826, when both John Adams and Thomas Jefferson passed away. 
Kahn agreed that their deaths marked the end of the country’s startup phase, because Adams and Jefferson never stopped worrying about the country that they helped create. “It was such an important thing that they were doing that their lives were completely embedded in it,” Kahn says. “And the same is true for me and the Internet.” This article appears in the May 2024 print issue as “The Great Interconnector.”
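For readers curious how the “network part” on that index card maps onto addressing today, here is a minimal sketch in Python, using only the standard ipaddress module, of how an IPv4 address splits into a network prefix and a host identifier. The address 10.7.0.42 and the 8-bit prefix are illustrative values, not a historical assignment.

```python
# Minimal sketch (illustrative values, not a historical assignment): how an
# IPv4 address splits into the "network part" that a registry hands out --
# the piece Kahn tracked on his index card -- and a host part that
# identifies one machine inside that network.
import ipaddress

iface = ipaddress.ip_interface("10.7.0.42/8")   # hypothetical host on an 8-bit network

print("full address :", iface.ip)                # 10.7.0.42
print("network part :", iface.network)           # 10.0.0.0/8, the registry-level entry
print("host part    :", int(iface.ip) - int(iface.network.network_address))

# Routers forward packets by matching the destination against known network
# prefixes; only the destination network needs to know its individual hosts.
```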

  • Video Friday: SpaceHopper
    by Evan Ackerman on 19. April 2024. at 16:07

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RoboCup German Open: 17–21 April 2024, KASSEL, GERMANY AUVSI XPONENTIAL 2024: 22–25 April 2024, SAN DIEGO Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS Enjoy today’s videos! In the SpaceHopper project, students at ETH Zurich developed a robot capable of moving in low gravity environments through hopping motions. It is intended to be used in future space missions to explore small celestial bodies. The exploration of asteroids and moons could provide insights into the formation of the universe, and they may contain valuable minerals that humanity could use in the future. The project began in 2021 as an ETH focus project for bachelor’s students. Now, it is being continued as a regular research project. A particular challenge in developing exploration robots for asteroids is that, unlike larger celestial bodies like Earth, there is low gravity on asteroids and moons. The students have therefore tested their robot’s functionality in zero gravity during a parabolic flight. The parabolic flight was conducted in collaboration with the European Space Agency as part of the ESA Academy Experiments Programme. [ SpaceHopper ] It’s still kind of wild to me that it’s now possible to just build a robot like Menteebot. Having said that, at present it looks to be a fairly long way from being able to usefully do tasks in a reliable way. [ Menteebot ] Look, it’s the robot we all actually want! [ Github ] I wasn’t quite sure what made this building especially “robot-friendly” until I saw the DEDICATED ROBOT ELEVATOR. [ NAVER ] We are glad to announce the latest updates with our humanoid robot CL-1. In the test, it demonstrates stair climbing in a single stride based on real-time terrain perception. For the very first time, CL-1 accomplishes back and forth running, in a stable and dynamic way! [ LimX Dynamics ] EEWOC [Extended-reach Enhanced Wheeled Orb for Climbing] uses a unique locomotion scheme to climb complex steel structures with its magnetic grippers. Its lightweight and highly extendable tape spring limb can reach over 1.2 meters, allowing it to traverse gaps and obstacles much larger than other existing climbing robots. Its ability to bend allows it to reach around corners and over ledges, and it can transition between surfaces easily thanks to assistance from its wheels. The wheels also let it drive more quickly and efficiently on the ground. These features make EEWOC well-suited for climbing the complex steel structures seen in real-world environments. [ Paper ] Thanks to its “buttock-contact sensors,” JSK’s musculoskeletal humanoid has mastered(ish) the chair-scoot. [ University of Tokyo ] Thanks, Kento! Physical therapy seems like a great application for a humanoid robot when you don’t really need that humanoid robot to do much of anything. [ Fourier Intelligence ] NASA’s Ingenuity Mars helicopter became the first vehicle to achieve powered, controlled flight on another planet when it took to the Martian skies on 19 April 2021. This video maps the location of the 72 flights that the helicopter took over the course of nearly three years. Ingenuity far surpassed expectations—soaring higher and faster than previously imagined.
[ JPL ] No thank you! [ Paper ] MERL introduces a new autonomous robotic assembly technology, offering an initial glimpse into how robots will work in future factories. Unlike conventional approaches where humans set pre-conditions for assembly, our technology empowers robots to adapt to diverse scenarios. We showcase the autonomous assembly of a gear box that was demonstrated live at CES2024. [ Mitsubishi ] Thanks, Devesh! In November 2023, Digit was deployed in a distribution center unloading totes from an AMR as part of regular facility operations, including a shift during Cyber Monday. [ Agility ] The PR2 just refuses to die. Last time I checked, official support for it ceased in 2016! [ University of Bremen ] DARPA’s Air Combat Evolution (ACE) program has achieved the first-ever in-air tests of AI algorithms autonomously flying a fighter jet against a human-piloted fighter jet in within-visual-range combat scenarios (sometimes referred to as “dogfighting”). In this video, team members discuss what makes the ACE program unlike other aerospace autonomy projects and how it represents a transformational moment in aerospace history, establishing a foundation for ethical, trusted, human-machine teaming for complex military and civilian applications. [ DARPA ] Sometimes robots that exist for one single purpose that they only do moderately successfully while trying really hard are the best of robots. [ CMU ]

  • Empower Your Supply Chain
    by Xometry on 19. April 2024. at 14:03

Xometry’s essential guide reveals the transformative power of artificial intelligence in supply chain optimisation. It lifts the lid on how machine learning, natural language processing, and big data can streamline procurement and enhance operational efficiency. The guide showcases applications across various sectors such as healthcare, construction, retail, and more, offering actionable insights and strategies. Readers will explore the workings of AI technologies, their implementation in manufacturing, and future trends in supply chain management, making it a valuable resource for professionals aiming to harness AI’s potential to innovate and optimise their supply chain processes. Download this free whitepaper now!

  • 50 Years Later, This Apollo-Era Antenna Still Talks to Voyager 2
    by Willie D. Jones on 18. April 2024. at 18:00

    For more than 50 years, Deep Space Station 43 has been an invaluable tool for space probes as they explore our solar system and push into the beyond. The DSS-43 radio antenna, located at the Canberra Deep Space Communication Complex, near Canberra, Australia, keeps open the line of communication between humans and probes during NASA missions. Today more than 40 percent of all data retrieved by celestial explorers, including Voyagers, New Horizons, and the Mars Curiosity rover, comes through DSS-43. “As Australia’s largest antenna, DSS-43 has provided two-way communication with dozens of robotic spacecraft,” IEEE President-Elect Kathleen Kramer said during a ceremony where the antenna was recognized as an IEEE Milestone. It has supported missions, Kramer noted, “from the Apollo program and NASA’s Mars exploration rovers such as Spirit and Opportunity to the Voyagers’ grand tour of the solar system. “In fact,” she said, “it is the only antenna remaining on Earth capable of communicating with Voyager 2.” Why NASA needed DSS-43 Maintaining two-way contact with spacecraft hurtling billions of kilometers away across the solar system is no mean feat. Researchers at NASA’s Jet Propulsion Laboratory, in Pasadena, Calif., knew that communication with distant space probes would require a dish antenna with unprecedented accuracy. In 1964 they built DSS-42—DSS-43’s predecessor—to support NASA’s Mariner 4 spacecraft as it performed the first-ever successful flyby of Mars in July 1965. The antenna had a 26-meter-diameter dish. Along with two other antennas at JPL and in Spain, DSS-42 obtained the first close-up images of Mars. DSS-42 was retired in 2000. NASA engineers predicted that to carry out missions beyond Mars, the space agency needed more sensitive antennas. So in 1969 they began work on DSS-43, which has a 64-meter-diameter dish. DSS-43 was brought online in December 1972—just in time to receive video and audio transmissions sent by Apollo 17 from the surface of the moon. It had greater reach and sensitivity than DSS-42 even after 42’s dish was upgraded in the early 1980s. The gap between the two antennas’ capabilities widened in 1987, when DSS-43 was equipped with a 70-meter dish in anticipation of Voyager 2’s 1989 encounter with the planet Neptune. DSS-43 has been indispensable in maintaining contact with the deep-space probe ever since. The dish’s size isn’t its only remarkable feature. The dish’s manufacturer took great pains to ensure that its surface had no bumps or rough spots. The smoother the dish surface, the better it is at focusing incident waves onto the signal detector so there’s a higher signal-to-noise ratio. DSS-43 boasts a pointing accuracy of 0.005 degrees (18 arc seconds)—which is important for ensuring that it is pointed directly at the receiver on a distant spacecraft. Voyager 2 broadcasts using a 23-watt radio. But by the time the signals traverse the multibillion-kilometer distance from the heliopause to Earth, their power has faded to a level 20 billion times weaker than what is needed to run a digital watch. Capturing every bit of the incident signals is crucial to gathering useful information from the transmissions. The antenna has a transmitter capable of 400 kilowatts, with a beam width of 0.0038 degrees. Without the 1987 upgrade, signals sent from DSS-43 to a spacecraft venturing outside the solar system likely never would reach their target. 
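To get a feel for why that sensitivity matters, here is a rough order-of-magnitude sketch in Python of the inverse-square spreading of Voyager 2's signal, using the article's figures of a 23-watt transmitter, roughly 20 billion kilometers, and a 70-meter dish. The 48-dBi transmit-antenna gain and the one-microwatt watch budget are assumptions not taken from the article, and all pointing, feed, and atmospheric losses are ignored, so this is a ballpark illustration rather than a link budget.

```python
import math

# Rough order-of-magnitude sketch of the signal power DSS-43 can collect.
# From the article: 23-W transmitter on Voyager 2, ~20 billion km away,
# collected by a 70-m dish. Assumptions NOT from the article: ~48 dBi of
# transmit-antenna gain and ~1 microwatt to run a digital watch.

P_TX = 23.0                              # watts radiated by Voyager 2
G_TX = 10 ** (48 / 10)                   # assumed high-gain-antenna gain (~48 dBi)
DISTANCE_M = 20e9 * 1e3                  # ~20 billion km, in meters
DISH_AREA = math.pi * (70 / 2) ** 2      # aperture of the 70-m dish, m^2

flux = P_TX * G_TX / (4 * math.pi * DISTANCE_M ** 2)  # W/m^2 arriving at Earth
received = flux * DISH_AREA                           # W collected, ignoring losses

print(f"received power : {received:.1e} W")
# The captured power is on the order of attowatts, vanishingly far below the
# roughly one microwatt a digital watch needs, which is why dish smoothness
# and pointing accuracy matter so much.
```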
NASA’s Deep Space Network The Canberra Deep Space Complex, where DSS-43 resides, is one of three such tracking stations operated by JPL. The other two are DSS-14 at the Goldstone Deep Space Communications Complex near Barstow, Calif., and DSS-63 at the Madrid Deep Space Communications Complex in Robledo de Chavela, Spain. Together, the facilities make up the Deep Space Network, which is the most sensitive scientific telecommunications system on the planet, according to NASA. At any given time, the network is tracking dozens of spacecraft carrying out scientific missions. The three facilities are spaced about 120 degrees of longitude apart. The strategic placement ensures that as the Earth rotates, at least one of the antennas has a line of sight to an object being tracked, at least for those close to the plane of the solar system. But DSS-43 is the only member of the trio that can maintain contact with Voyager 2. Ever since its flyby of Neptune’s moon Triton in 1989, Voyager 2 has been on a trajectory below the plane of the planets, so that it no longer has a line of sight with any radio antennas in the Earth’s Northern Hemisphere. To ensure that DSS-43 can still place the longest of long-distance calls, the antenna underwent a round of updates in 2020. A new X-band cone was installed. DSS-43 transmits radio signals in the X (8 to 12 gigahertz) and S (2 to 4 GHz) bands; it can receive signals in the X, S, L (1 to 2 GHz), and K (12 to 40 GHz) bands. The dish’s pointing accuracy also was tested and recertified. Once the updates were completed, test commands were sent to Voyager 2. After about 37 hours, DSS-43 received a response from the space probe confirming it had received the call, and it executed the test commands with no issues (a quick light-time check of that figure follows this article). DSS-43 is still relaying signals between Earth and Voyager 2, which passed the heliopause in 2018 and is now some 20 billion km from Earth. [From left] IEEE Region 10 director Lance Fung, Kevin Ferguson, IEEE President-Elect Kathleen Kramer, and Ambarish Natu, past chair of the IEEE Australian Capital Territory Section at the IEEE Milestone dedication ceremony held at the Canberra Deep Space Communication Complex in Australia. Ferguson is the director of the complex.Ambarish Natu Other important missions DSS-43 has played a vital role in missions closer to Earth as well, including NASA’s Mars Science Laboratory mission. When the space agency sent Curiosity, a golf cart–size rover, to explore the Gale crater and Mount Sharp on Mars in 2011, DSS-43 tracked Curiosity as it made its nail-biting seven-minute descent into Mars’s atmosphere. It took roughly 20 minutes for radio signals to traverse the 320-million-kilometer distance between Mars and Earth, and then DSS-43 delivered the good news: The rover had landed safely and was operational. “NASA plans to send future generations of astronauts from the Moon to Mars, and DSS-43 will play an important role as part of NASA’s Deep Space Network,” says Ambarish Natu, an IEEE senior member who is a past chair of the IEEE Australian Capital Territory (ACT) Section. DSS-43 was honored with an IEEE Milestone in March during a ceremony held at the Canberra Deep Space Communication Complex. “This is the second IEEE Milestone recognition given in Australia, and the first for ACT,” Lance Fung, IEEE Region 10 director, said during the ceremony. A plaque recognizing the technology is now displayed at the complex.
It reads: First operational in 1972 and later upgraded in 1987, Deep Space Station 43 (DSS-43) is a steerable parabolic antenna that supported the Apollo 17 lunar mission, Viking Mars landers, Pioneer and Mariner planetary probes, and Voyager’s encounters with Jupiter, Saturn, Uranus, and Neptune. Planning for many robotic and human missions to explore the solar system and beyond has included DSS-43 for critical communications and tracking in NASA’s Deep Space Network. Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world. The IEEE Australian Capital Territory Section sponsored the nomination.
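As a quick sanity check on the roughly 37-hour round trip described above, the snippet below computes the light-travel time across the approximately 20 billion kilometers now separating Earth and Voyager 2.

```python
# Quick check of the ~37-hour round trip reported after the 2020 upgrade:
# radio waves crossing roughly 20 billion km to Voyager 2 at light speed.
C = 299_792_458            # speed of light, m/s
DISTANCE_M = 20e9 * 1e3    # ~20 billion km, in meters

one_way_hours = DISTANCE_M / C / 3600
print(f"one-way light time : {one_way_hours:.1f} h")       # about 18.5 hours
print(f"command + reply    : {2 * one_way_hours:.1f} h")   # about 37 hours
```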

  • 50 by 20: Wireless EV Charging Hits Key Benchmark
    by Willie D. Jones on 18. April 2024. at 12:00

Researchers at Oak Ridge National Laboratory in Tennessee recently announced that they have set a record for wireless EV charging. Their system’s magnetic coils have reached a 100-kilowatt power level. In tests in their lab, the researchers reported their system’s transmitter supplied enough energy to a receiver mounted on the underside of a Hyundai Kona EV to boost the state of charge in the car’s battery by 50 percent (enough for about 150 kilometers of range) in less than 20 minutes (a back-of-the-envelope check of that figure appears at the end of this article). “Impressive,” says Duc Minh Nguyen, a research associate in the Communication Theory Lab at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia. Nguyen is the lead author of several papers on dynamic wireless charging, including some published when he was working toward his Ph.D. at KAUST. In 15 minutes, “the batteries could take on enough energy to drive for another two-and-a-half or three hours—just in time for another pit stop.”–Omer Onar, Oak Ridge National Laboratory The Oak Ridge announcement marks the latest milestone in work on wireless charging that stretches back more than a decade. As IEEE Spectrum reported in 2018, WiTricity, headquartered in Watertown, Mass., had announced a partnership with an unspecified automaker to install wireless charging receivers on its EVs. Then in 2021, the company revealed that it was working with Hyundai to outfit some of its Genesis GV60 EVs with wireless charging. (In early 2023, Car Buzz reported that it had sniffed out paperwork pointing to Hyundai’s plans to equip its Ioniq 5 EV with wireless charging capability.) The plan, said WiTricity, was to equip EVs with magnetic resonance charging capability so that if such a vehicle were parked over a static charging pad installed in, say, the driver’s garage, the battery would reach full charge overnight. By 2020, we noted, a partnership had been worked out between Jaguar, Momentum Dynamics, Nordic taxi operator Cabonline, and charging company Fortum Recharge. That group set out to outfit 25 Jaguar I-Pace electric SUVs with Momentum Dynamics’ inductive charging receivers. The receivers and transmitters, rated at 50 to 75 kilowatts, were designed so that any of the specially equipped taxis would receive enough energy for 80 kilometers of range by spending 15 minutes above the energized coils embedded in the pavement as the vehicle works its way through a taxi queue. Now, according to Oak Ridge, roughly the same amount of charging time will yield about 1.5 times that range. The Oak Ridge research team admits that installing wireless charging pads is expensive, but they say dynamic and static wireless charging can play an important role in expanding the EV charging infrastructure. This magnetic resonance transmitter pad can wirelessly charge an EV outfitted with a corresponding receiver.Oak Ridge National Laboratory Omer Onar, an R&D staffer in the Power Electronics and Electric Machinery Group at Oak Ridge and a member of the team that developed the newest version of the wireless charging system, envisions the static versions of these wireless charging systems being useful even for extended drives on highways. He imagines them being placed under a section of specially marked parking spaces that allow drivers to pull up and start charging without plugging in. “The usual routine—fueling up, using the restroom, and grabbing coffee or a snack usually takes about 15 minutes or more.
In that amount of time, the batteries could take on enough energy to drive for another two-and-a-half or three hours—just in time for another pit stop.” What’s more, says Onar, he and his colleagues are still working to refine the system so it will transfer energy more efficiently than the one-off prototype they built in their lab. Meanwhile, Israeli company Electreon has already installed electrified roads for pilot projects in Sweden, Norway, Italy, and other European countries, and has plans for similar projects in the United States. The company found that by installing a stationary wireless charging spot at one terminal end of a bus route near Tel Aviv University (its first real-world project), electric buses operating on that route were able to ferry passengers back and forth using batteries with one-tenth the storage capacity that was previously deemed necessary. Smaller batteries mean cheaper vehicles. What’s more, says Nguyen, charging a battery in short bursts throughout the day instead of depleting it and filling it up with, say, an hour-long charge at a supercharging station extends the battery’s life.
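For readers who want to check the arithmetic behind the headline result, here is a back-of-the-envelope sketch. The 100-kilowatt power level and the 20-minute window come from the article; the roughly 64-kilowatt-hour battery pack assumed for the Hyundai Kona Electric does not, and conversion losses are ignored, so treat the output as an upper bound.

```python
# Back-of-the-envelope check of the Oak Ridge result: 100 kW of wireless power
# for just under 20 minutes, versus ~50 percent of an EV battery pack.
# Assumption not in the article: a Hyundai Kona Electric pack of about 64 kWh.
# Conversion and coil losses are ignored, so this is an upper bound.

POWER_KW = 100.0
MINUTES = 20.0
PACK_KWH = 64.0          # assumed usable pack capacity

energy_kwh = POWER_KW * (MINUTES / 60.0)   # ideal energy transferred
soc_gain = energy_kwh / PACK_KWH           # fraction of the pack refilled

print(f"energy delivered : {energy_kwh:.1f} kWh")   # ~33 kWh
print(f"state of charge  : +{soc_gain:.0%}")        # ~52 percent, before losses
```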

  • High-Performance Data, Signal, and Power Solutions for the Most Advanced Vehicles
    by TE Connectivity on 17. April 2024. at 19:54

    This sponsored article is brought to you by TE Automotive. Staying ahead of the curve in the ever-changing automotive landscape — no matter the vehicle powertrain — requires reliable, precision-engineered connectivity solutions and a trusted engineering partner you can count on. TE Connectivity (TE) is a trailblazer in automotive connectivity solutions, with customer-centric engineering, personalized sales support, and a comprehensive distribution network that provides unmatched speed-to-market. From concept to design, we leverage our decades of expertise and industry know-how to support you with the industry’s most comprehensive portfolio of data, signal, and power automotive connectivity solutions. Our solutions can be found in nearly every vehicle — making TE your go-to, complete connectivity partner for the most advanced vehicle architectures of today and tomorrow. Explore TE’s innovative automotive solutions, or connect with us today to discuss how to solve your specific design challenges.

  • U.S. Commercial Drone Delivery Comes Closer
    by Stephen Cass on 17. April 2024. at 15:10

    Stephen Cass: Hello and welcome to Fixing the Future, an IEEE Spectrum podcast where we look at concrete solutions to tough problems. I’m your host, Stephen Cass, a senior editor at IEEE Spectrum. And before I start, I just want to tell you that you can get the latest coverage of some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. We’ve been covering the drone delivery company Zipline in Spectrum for several years, and I do encourage listeners to check out our great onsite reporting from Rwanda in 2019 when we visited one of Zipline’s dispatch centers for delivering vital medical supplies into rural areas. But now it’s 2024, and Zipline is expanding into commercial drone delivery in the United States, including into urban areas, and hitting some recent milestones. Here to talk about some of those milestones today, we have Keenan Wyrobek, Zipline’s co-founder and CTO. Keenan, welcome to the show. Keenan Wyrobek: Great to be here. Thanks for having me. Cass: So before we get into what’s going on with the United States, can you first catch us up on how things have been going on with Rwanda and the other African countries you’ve been operating in? Wyrobek: Yeah, absolutely. So we’re now operating in eight countries, including here in the US. That includes a handful of countries in Africa, as well as Japan and Europe. So in Africa, it’s really exciting. So the scale is really impressive, basically. As we’ve been operating, started eight years ago with blood, then moved into vaccine delivery and delivering many other things in the healthcare space, as well as outside the healthcare space. We can talk a little bit about in things like animal husbandry and other things. The scale is really what’s exciting. We have a single distribution center there that now regularly flies more than the equivalent of once the equator of the Earth every day. And that’s just from one of a whole bunch of distribution centers. That’s where we are really with that operation today. Cass: So could you talk a little bit about those non-medical systems? Because this was very much how we’d seen blood being parachuted down from these drones and reaching those distant centers. What other things are you delivering there? Wyrobek: Yeah, absolutely. So start with blood, like you said, then vaccines. We’ve now done delivered well over 15 million vaccine doses, lots of other pharmaceutical use cases to hospitals and clinics, and more recently, patient home delivery for chronic care of things like hypertension, HIV-positive patients, and things like that. And then, yeah, moved into some really exciting use cases and things like animal husbandry. One that I’m personally really excited about is supporting these genetic diversity campaigns. It’s one of those things very unglamorous, but really impactful. One of the main sources of protein around the world is cow’s milk. And it turns out the difference between a non-genetically diverse cow and a genetically diverse cow can be 10x difference in milk production. And so one of the things we deliver is bull semen. We’re very good at the cold chain involved in that as we’ve mastered in vaccines and blood. And that’s just one of many things we’re doing in other spaces outside of healthcare directly. Cass: Oh, fascinating. So turning now to the US, it seems like there’s been two big developments recently. 
One is you’re getting close to deploying Platform 2, which has some really fascinating tech that allows packages to be delivered very precisely by tether. And I do want to talk about that later. But first, I want to talk about a big milestone you had late last year. And this was something that goes by the very unlovely acronym of a BVLOS flight. Can you tell us what a BVLOS stands for and why that flight was such a big deal? Wyrobek: Yeah, “beyond visual line of sight.” And so that is basically, before this milestone last year, all drone deliveries, all drone operations in the US were done by people standing on the ground, looking at the sky, that line of sight. And that’s how basically we made sure that the drones were staying clear of aircraft. This is true of everybody. Now, this is important because in places like the United States, many aircraft don’t and aren’t required to carry a transponder, right? So transponders where they have a radio signal that they’re transmitting their location that our drones can listen to and use to maintain separation. And so the holy grail of basically scalable drone operations, of course, it’s physically impossible to have people standing around all the world staring at the sky, and is a sensing solution where you can sense those aircraft and avoid those aircraft. And this is something we’ve been working on for a long time and got the approval for late last year with the FAA, the first-ever use of sensors to detect and avoid for maintaining safety in the US airspace, which is just really, really exciting. That’s now been in operations in two distribution centers here, one in Utah and one in Arkansas ever since. Cass: So could you just tell us a little bit about how that tech works? It just seems to be quite advanced to trust a drone to recognize, “Oh, that is an actual airplane that’s a Cessna that’s going to be here in about two minutes and is a real problem,” or, “No, it’s a hawk, which is just going about his business and I’m not going to ever come close to it at all because it’s so far away. Wyrobek: Yeah, this is really fun to talk about. So just to start with what we’re not doing, because most people expect us to use either a radar for this or cameras for this. And basically, those don’t work. And the radar, you would need such a heavy radar system to see 360 degrees all the way around your drone. And this is really important because two things to kind of plan in your mind. One is we’re not talking about autonomous driving where cars are close together. Aircraft never want to be as close together as cars are on a road, right? We’re talking about maintaining hundreds of meters of separation, and so you sense it a long distance. And drones don’t have right of way. So what that means is even if a plane’s coming up behind the drone, you got to sense that plane and get out of the way. And so to have enough radar on your drone that you can actually see far enough to maintain that separation in every direction, you’re talking about something that weighs many times the weight of a drone and it just doesn’t physically close. And so we started there because that’s sort of where we assumed and many people assume that’s the place to start. Then looked at cameras. Cameras have lots of drawbacks. And fundamentally, you can sort of-- we’ve all had this, you’ve taken your phone and tried to take a picture of an airplane and you look at the picture, you can’t see the airplane. Yeah.
It takes so many pixels of perfectly clean lenses to see an aircraft at a kilometer or two away that it really just is not practical or robust enough. And that’s when we went back to the drawing board and it ended up where we ended up, which is using an array of microphones to listen for aircraft, which works very well at very long distances to then maintain separation from those other aircraft. Cass: So yeah, let’s talk about Platform 2 a little bit more because I should first explain for listeners who maybe aren’t familiar with Zipline that these are not the kind of the little purely sort of helicopter-like drones. These are these fixed wing with sort of loiter capability and hovering capabilities. So they’re not like your Mavic drones and so on. These have a capacity then for long-distance flight, which is what it gives them. Wyrobek: Yeah. And maybe to jump into Platform 2— maybe starting with Platform 1, what does it look like? So Platform 1 is what we’ve been operating around the world for years now. And this basically looks like a small airplane, right? In the industry referred to as a fixed-wing aircraft. And it’s fixed wing because to solve the problem of going from a metro area to surrounding countryside, really two things matter. Your range and long range and low cost. And a fixed-wing aircraft over something that can hover has something like an 800% advantage in range and cost. And that’s why we did fix wing because it actually works for our customers for their needs for that use case. Platform 2 is all about, how do you deliver to homes and in metro areas where you need an incredible amount of precision to deliver to nearly every home. And so Platform 2—we call our drone zips—our drone, it flies out to the delivery site. Instead of floating a package down to a customer like Platform 1 does, it hovers. Platform 2 hovers and lowers down what we call a droid. And so the droids on tether. The drone stays way up high, about 100 meters up high, and the drone lowers down. And the drone itself-- sorry, the droid itself, it lowers down, it can fly. Right? So you think of it as like the tether does the heavy lifting, but the droid has fans. So if it gets hit by a gust of wind or whatnot, it can still stay very precisely on track and come in and deliver it to a very small area, put the package down, and then be out of there seconds later. Cass: So let me get this right. Platform 2 is kind of as a combo, fixed wing and rotor wing. It’s like a VTOL like that. I’m cheating here a little bit because my colleague Evan Ackerman has a great Q&A on the Spectrum website with you, some of your team members about the nitty-gritty of how that design was evolved. But first off, it’s like a little droid thing at the end of the tether. How much extra precision do all those fans and stuff give you? Wyrobek: Oh, massive, right? We can come down and hit a target within a few centimeters of where we want to deliver, which means we can deliver. Like if you have a small back porch, which is really common, right, in a lot of urban areas to have a small back porch or a small place on your roof or something like that, we can still just deliver as long as we have a few feet of open space. And that’s really powerful for being able to serve our customers. And a lot of people think of Platform 2 as like, “Hey, it’s a slightly better way of doing maybe a DoorDash-style operation, people in cars driving around.” And to be clear, it’s not slightly better. 
It’s massively better, much faster, more environmentally friendly. But we have many contracts for Platform 2 in the health space with U.S. health system partners and health systems around the world. And what’s powerful about these customers in terms of their needs is they really need to serve all of their customers. And this is where a lot of our sort of-- this is where our engineering effort goes is how do you make a system that doesn’t just kind of work for some folks, and they can use it if they want to, but a health system is like, “No, I want this to work for everybody in my health network.” And so how do we get to that near 100 percent serviceability? And that’s what this droid really enables us to do. And of course, it has all these other magic benefits too. It makes some of the hardest design problems in this space much, much easier. The safety problem gets much easier by keeping the drone way up high. Cass: Yeah, how high is Platform 2 hovering when it’s doing its deliveries? Wyrobek: About 100 meters, so 300 plus feet, right? We’re talking about high up as a football field is long. And so it’s way up there. And it also helps with things like noise, right? We don’t want to live in a future where drones are all around us sounding like swarms of insects. We want drones to make no noise. We want them to just melt into the background. And so it makes that kind of problem much easier as well. And then, of course, the droid gets other benefits where for many products, we don’t need any packaging at all. We can just deliver the product right onto a table in your porch. And not just from a cost perspective, but again, from— we’re all familiar with the nightmare of packaging from deliveries we get. Eliminating packaging just has to be our future. And we’re really excited to advance that future. Cass: From Evan’s Q&A, I know that a lot of effort went into making the droid element look rather adorable. Why was that so important? Wyrobek: Yeah, I like to describe it as sort of a cross between three things, if you kind of picture this, like a miniature little fan boat, right, because it has some fan, a big fan on the back, looks like a little fan boat, combined with sort of a baby seal, combined with a toaster. It sort of has that look to it. And making it adorable, there’s a bunch of sort of human things that matter, right? I want this to be something that when my grandmother, who’s not tech-savvy, gets these deliveries, it’s approachable. It doesn’t come off as sort of scary. And when you make something cute, not only does it feel approachable, but it also forces you to get the details right so it is approachable, right? The rounded corners, right? This sounds really benign, but a lot of robots, it turns out if you bump into them, they scratch you. And we want you to be able to bump into this droid, and this is no big deal. And so getting the surfaces right, getting them— the surface is made sort of like a helmet foam. If you can picture that, right? The kind of thing you wouldn’t be afraid to touch if it touched you. And so getting it both to be something that feels safe, but is something that actually is safe to be around, those two things just matter a lot. Because again, we’re not designing this for some piloty kind of low-volume thing. Our customers want this in phenomenal volume. And so we really want this to be something that we’re all comfortable around.
Cass: Yeah, and one thing I want to pull out from that Q&A as well is it was an interesting note, because you mentioned it has three fans, but they’re rather unobtrusive. And the original design, you had two big fans on the sides, which was very great for maneuverability. But you had to get rid of those and come up with a three-fan design. And maybe you can explain why that was so. Wyrobek: Yeah, that’s a great detail. So the original design, the picture, it was like, imagine the package in the middle, and then kind of on either side of the package, two fans. So when you looked at it, it kind of looked like— I don’t know. It kind of looked like the package had big mouse ears or something. And when you looked at it, everybody had the same reaction. You kind of took this big step back. It was like, “Whoa, there’s this big thing coming down into my yard.” And when you’re doing this kind of user testing, we always joke, you don’t need to bring users in if it already makes you take a step back. And this is one of those things where like, “That’s just not good enough, right, to even start with that kind of refined design.” But when we got the sort of profile of it smaller, the way we think about it from a design experiment perspective is we want to deliver a large package. So basically, the droid needs to be as sucked down as small additional volume around that package as possible. So we spent a lot of time figuring out, “Okay, how do you do that sort of physically and aesthetically in a way that also gets that amazing performance, right? Because when I say performance, what I’m talking about is we still need it to work when the winds are blowing really hard outside and still can deliver precisely. And so it has to have a lot of aero performance to do that and still deliver precisely in essentially all weather conditions. Cass: So I guess I just want to ask you then is, what kind of weight and volume are you able to deliver with this level of precision? Wyrobek: Yeah, yeah. So we’ll be working our way up to eight pounds. I say working our way up because that’s part of, once you launch a product like this, there’s refinement you can do over time on many layers, but eight pounds, which was driven off, again, these health use cases. So it does basically 100 percent of what our health partners need to do. And it turns out it’s nearly 100 percent of what we want to do in meal delivery. And even in the goods sector, I’m impressed by the percentage of goods we can deliver. One of our partners we work with, we can deliver over 80 percent of what they have in their big box store. And yeah, it’s wildly exceeding expectations on nearly every axis there. And volume, it’s big. It’s bigger than a shoebox. I don’t have a great-- I’m trying to think of a good reference to kind of bring it to life. But it looks like a small cooler basically inside. And it can comfortably fit a meal for four to give you a sense of the amount of food you can fit in there. Yeah. Cass: So we’ve seen this history of Zipline in rural areas, and now we’re talking about expanding operations in more urban areas, but just how urban? I don’t imagine that we’ll see the zip lines zooming around, say, the very hemmed-in streets, say, here in Midtown Manhattan. So what level of urban are we talking about? Wyrobek: Yeah, so the way we talk about it internally in our design process is basically we call three-story sprawl.
Manhattan is the place where when we think of New York, we’re not talking about Manhattan, but most of the rest of New York, we are talking about it, right? Like the Bronx, things like that. We just have this sort of three stories forever. And that’s a lot of the world out here in California, that’s most of San Francisco. I think it’s something like 98 percent of San Francisco is that. If you’ve ever been to places like India and stuff like that, the cities, it’s just sort of this three stories going for a really long way. And that’s what we’re really focused on. And that’s also where we provide that incredible value because that also matches where the hardest traffic situations and things like that can make any other sort of terrestrial on-demand delivery be phenomenally late. Cass: Well, no, I live out in Queens, so I agree there’s not much skyscrapers out there. Although there are quite a few trees and so on, but at the same time, there’s usually some sort of sidewalk availability. So is that kind of what you’re hoping to get into? Wyrobek: Exactly. So as long as you’ve got a porch with a view of the sky or an alley with a view of the sky, it can be literally just a few feet, we can get in there, make a delivery, and be on our way. Cass: And so you’ve done this preliminary test with the FAA, the BVLOS test, and so on. How close do you think you are to, and you’re working with a lot of partners, to really seeing this become routine commercial operations? Wyrobek: Yeah, yeah. So at relatively limited scale, our operations here in Utah and in Arkansas that are leveraging that FAA approval for beyond visual line-of-sight flight operations, that’s been all day, every day now since our approval last year. With Platform 2, we’re really excited. That’s coming later this year. We’re currently in the phase of basically massive-scale testing. So we now have our production hardware and we’re taking it through a massive ground testing campaign. So picture dozens of thermal chambers and five chambers and things like that just running to really both validate that we have the reliability we need and flush out any issues that we might have missed so we can address that difference between what we call the theoretical reliability and the actual reliability. And that’s running in parallel to a massive flight test campaign. Same idea, right? We’re slowly ramping up the flight volume as we fly into heavier conditions really to make sure we know the limits of the system. We know its actual reliability and true scaled operations so we can get the confidence that it’s ready to operate for people. Cass: So you’ve got Platform 2. What’s kind of next on your technology roadmap for any possible platform three? Wyrobek: Oh, great question. Yeah, I can’t comment on platform three at this time, but. And I will also say, Zipline is pouring our heart into Platform 2 right now. Getting Platform 2 ready for this-- the way I like to talk about this internally is today, we fly about four times the equator of the Earth in our operations on average. And that’s a few thousand flights per day. But the demand we have is for more like millions of flights per day, if not beyond. And so on the log scale, right, we’re halfway there. Three orders of magnitude down, three more zeros to come. And the level of testing, the level of systems engineering, the level of refinement required to do that is a lot. And there’s so many systems from weather forecasting to our onboard autonomy and our fleet management systems.
And so to highlight one team, our system test team run by this really impressive individual named Juan Albanell, this team has taken us from where we were two years ago, where we had shown the concept at a very prototype stage of this delivery experience, and we’ve done the first order math kind of on the architecture and things like that through the iterations in test to actually make sure we had a drone that could actually fly in all these weather conditions with all the robustness and tolerance required to actually go to this global scale that Platform 2 is targeting. Cass: Well, that’s fantastic. Well, I think there’s a lot more to talk about to come up in the future, and we look forward to talking with Zipline again. But for today, I’m afraid we’re going to have to leave it there. But it was really great to have you on the show, Keenan. Thank you so much. Wyrobek: Cool. Absolutely, Stephen. It was a pleasure to speak with you. Cass: So today on Fixing the Future, we were talking with Zipline’s Keenan Wyrobek about the progress of commercial drone deliveries. For IEEE Spectrum, I’m Stephen Cass, and I hope you’ll join us next time.
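Zipline has not published the algorithms behind the acoustic detect-and-avoid system Wyrobek describes, but the underlying idea, estimating where a sound is coming from using the tiny arrival-time differences across an array of microphones, can be illustrated with a classic time-difference-of-arrival calculation. The sketch below runs on synthetic data with made-up microphone spacing and sample rate; it is an illustration of the general technique, not Zipline's implementation.

```python
# Illustrative only: a textbook time-difference-of-arrival (TDOA) bearing
# estimate with two microphones and synthetic noise. Zipline's actual
# detect-and-avoid system is not public; spacing, sample rate, and angles
# here are made up for the demonstration.
import numpy as np

FS = 48_000          # sample rate, Hz
C_SOUND = 343.0      # speed of sound, m/s
SPACING = 0.5        # assumed distance between the two microphones, m

# Simulate broadband "engine" noise arriving from 40 degrees off broadside.
true_angle = np.deg2rad(40.0)
delay_samples = int(round(SPACING * np.sin(true_angle) / C_SOUND * FS))

rng = np.random.default_rng(0)
source = rng.standard_normal(int(0.2 * FS))            # 0.2 s of noise
mic_a = source + 0.05 * rng.standard_normal(source.size)
mic_b = np.roll(source, delay_samples) + 0.05 * rng.standard_normal(source.size)

# Cross-correlate to find the lag that best aligns the two channels.
corr = np.correlate(mic_b, mic_a, mode="full")
lag = int(np.argmax(corr)) - (source.size - 1)

est_angle = np.arcsin(np.clip(lag / FS * C_SOUND / SPACING, -1.0, 1.0))
print(f"true bearing {np.degrees(true_angle):.1f} deg, "
      f"estimated {np.degrees(est_angle):.1f} deg")
# A real array uses more microphones to resolve left/right ambiguity and to
# track sources in three dimensions; this pair only recovers one angle.
```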

  • Boston Dynamics’ Robert Playter on the New Atlas
    by Evan Ackerman on 17. April 2024. at 13:15

    Boston Dynamics has just introduced a new Atlas humanoid robot, replacing the legendary hydraulic Atlas and intended to be a commercial product. This is huge news from the company that has spent the last decade building the most dynamic humanoids that the world has ever seen, and if you haven’t read our article about the announcement (and seen the video!), you should do that right now. We’ve had about a decade of pent-up questions about an all-electric productized version of Atlas, and we were lucky enough to speak with Boston Dynamics CEO Robert Playter to learn more about where this robot came from and how it’s going to make commercial humanoid robots (finally) happen. Robert Playter was the Vice President of Engineering at Boston Dynamics starting in 1994, which I’m pretty sure was back when Boston Dynamics still intended to be a modeling and simulation company rather than a robotics company. Playter became the CEO in 2019, helping the company make the difficult transition from R&D to commercial products with Spot, Stretch, and now (or very soon) Atlas. We talked with Playter about what the heck took Boston Dynamics so long to make this robot, what the vision is for Atlas as a product, all that extreme flexibility, and what comes next. Robert Playter on: What Took So Long The Product Approach A General Purpose Robot? Hydraulic Versus Electric Extreme Range of Motion Atlas’ Head Advantages in Commercialization What’s Next IEEE Spectrum: So what’s going on? Robert Playter: Boston Dynamics has built an all-electric humanoid. It’s our newest generation of what’s been an almost 15-year effort in developing humanoids. We’re going to launch it as a product, targeting industrial applications, logistics, and places that are much more diverse than where you see Stretch—heavy objects with complex geometry, probably in manufacturing type environments. We’ve built our first robot, and we believe that’s really going to set the bar for the next generation of capabilities for this whole industry. What took you so long?! Playter: Well, we wanted to convince ourselves that we knew how to make a humanoid product that can handle a great diversity of tasks—much more so than our previous generations of robots—including at-pace bimanual manipulation of the types of heavy objects with complex geometry that we expect to find in industry. We also really wanted to understand the use cases, so we’ve done a lot of background work on making sure that we see where we can apply these robots fruitfully in industry. We’ve obviously been working on this machine for a while, as we’ve been doing parallel development with our legacy Atlas. You’ve probably seen some of the videos of Atlas moving struts around—that’s the technical part of proving to ourselves that we can make this work. And then really designing a next generation machine that’s going to be an order of magnitude better than anything the world has seen. “We’re not anxious to just show some whiz-bang tech, and we didn’t really want to indicate our intent to go here until we were convinced that there is a path to a product.” —Robert Playter, Boston Dynamics With Spot, it felt like Boston Dynamics developed the product first, without having a specific use case in mind: you put the robot out there and let people discover what it was good for. Is your approach different with Atlas? Playter: You’re absolutely right. 
Spot was a technology looking for a product, and it’s taken time for us to really figure out the product market fit that we have in industrial inspection. But the challenge of that experience has left us wiser about really identifying the target applications before you say you’re going to build these things at scale. Stretch is very different, because it had a clear target market. Atlas is going to be more like Stretch, although it’s going to be way more than a single task robot, which is kind of what Stretch is. Convincing ourselves that we could really generalize with Atlas has taken a little bit of time. This is going to be our third product in about four years. We’ve learned so much, and the world is different from that experience. [back to top] Is your vision for Atlas one of a general purpose robot? Playter: It definitely needs to be a multi-use case robot. I believe that because I don’t think there’s very many examples where a single repetitive task is going to warrant these complex robots. I also think, though, that the practical matter is that you’re going to have to focus on a class of use cases, and really making them useful for the end customer. The lesson we’ve learned with both Spot and Stretch is that it’s critical to get out there and actually understand what makes this robot valuable to customers while making sure you’re building that into your development cycle. And if you can start that before you’ve even launched the product, then you’ll be better off. [back to top] How does thinking of this new Atlas as a product rather than a research platform change things? Playter: I think the research that we’ve done over the past 10 or 15 years has been essential to making a humanoid useful in the first place. We focused on dynamic balancing and mobility and being able to pick something up and still maintain that mobility—those were research topics of the past that we’ve now figured out how to manage and are essential, I think, to doing useful work. There’s still a lot of work to be done on generality, so that humanoids can pick up any one of a thousand different parts and deal with them in a reasonable way. That level of generality hasn’t been proven yet; we think there’s promise, and that AI will be one of the tools that helps solve that. And there’s still a lot of product prototyping and iteration that will come out before we start building massive numbers of these things and shipping them to customers. “This robot will be stronger at most of its joints than a person, and even an elite athlete, and will have a range of motion that exceeds anything a person can ever do.” —Robert Playter, Boston Dynamics For a long time, it seemed like hydraulics were the best way of producing powerful dynamic motions for robots like Atlas. Has that now changed? Playter: We first experimented with that with the launch of Spot. We had the same issue years ago, and discovered that we could build powerful lightweight electric motors that had the same kind of responsiveness and strength, or let’s say sufficient responsiveness and strength, to really make that work. We’ve designed an even newer set of really compact actuators into our electric Atlas, which pack the strength of essentially an elite human athlete into these tiny packages that make an electric humanoid feasible for us. So, this robot will be stronger at most of its joints than a person, and even an elite athlete, and will have a range of motion that exceeds anything a person can ever do. 
We’ve also compared the strength of our new electric Atlas to our hydraulic Atlas, and the electric Atlas is stronger. [back to top] In the context of Atlas’ range of motion, that introductory video was slightly uncomfortable to watch, which I’m sure was deliberate. Why introduce the new Atlas in that way? Playter: These high range of motion actuators are going to enable a unique set of movements that ultimately will let the robot be very efficient. Imagine being able to turn around without having to take a bunch of steps to turn your whole body instead. The motions we showed [in the video] are ones where our engineers were like, “hey, with these joints, we could get up like this!” And it just wasn’t something we had really thought about before. This flexibility creates a palette that you can design new stuff on, and we’re already having fun with it and we decided we wanted to share that excitement with the world. [back to top] “Everybody will buy one robot—we learned that with Spot. But they won’t start by buying fleets, and you don’t have a business until you can sell multiple robots to the same customer.” —Robert Playter, Boston Dynamics This does seem like a way of making Atlas more efficient, but I’ve heard from other folks working on humanoids that it’s important for robots to move in familiar and predictable ways for people to be comfortable working around them. What’s your perspective on that? Playter: I do think that people are going to have to become familiar with our robot; I don’t think that means limiting yourself to human motions. I believe that ultimately, if your robot is stronger or more flexible, it will be able to do things that humans can’t do, or don’t want to do. One of the real challenges of making a product useful is that you’ve got to have sufficient productivity to satisfy a customer. If you’re slow, that’s hard. We learned that with Stretch. We had two generations of Stretch, and the first generation did not have a joint that let it pivot 180 degrees, so it had to ponderously turn around between picking up a box and dropping it off. That was a killer. And so we decided “nope, gotta have that rotational joint.” It lets Stretch be so much faster and more efficient. At the end of the day, that’s what counts. And people will get used to it. What can you tell me about the head? Boston Dynamics CEO Robert Playter said the head on the new Atlas robot has been designed not to mimic the human form but rather “to project something else: a friendly place to look to gain some understanding about the intent of the robot.”Boston Dynamics Playter: The old Atlas did not have an articulated head. But having an articulated head gives you a tool that you can use to indicate intent, and there are integrated lights which will be able to communicate to users. Some of our original concepts had more of a [human] head shape, but for us they always looked a little bit threatening or dystopian somehow, and we wanted to get away from that. So we made a very purposeful decision about the head shape, and our explicit intent was for it not to be human-like. We’re trying to project something else: a friendly place to look to gain some understanding about the intent of the robot. The design borrows from some friendly shapes that we’d seen in the past. For example, there’s the old Pixar lamp that everybody fell in love with decades ago, and that informed some of the design for us.
[back to top] How do you think the decade(s) of experience working on humanoids as well as your experience commercializing Spot will benefit you when it comes to making Atlas into a product? Playter: This is our third product, and one of the things we’ve learned is that it takes way more than some interesting technology to make a product work. You have to have a real use case, and you have to have real productivity around that use case that a customer cares about. Everybody will buy one robot—we learned that with Spot. But they won’t start by buying fleets, and you don’t have a business until you can sell multiple robots to the same customer. And you don’t get there without all this other stuff—the reliability, the service, the integration. When we launched Spot as a product several years ago, it was really about transforming the whole company. We had to take on all of these new disciplines: manufacturing, service, measuring the quality and reliability of our robots and then building systems and tools to make them steadily better. That transformation is not easy, but the fact that we’ve successfully navigated through that as an organization means that we can easily bring that mindset and skill set to bear as a company. Honestly, that transition takes two or three years to get through, so all of the brand new startup companies out there who have a prototype of a humanoid working—they haven’t even begun that journey. There’s also cost. Building something effectively at a reasonable cost so that you can sell it at a reasonable cost and ultimately make some money out of it, that’s not easy either. And frankly, without the support of Hyundai which is of course a world-class manufacturing expert, it would be really challenging to do it on our own. So yeah, we’re much more sober about what it takes to succeed now. We’re not anxious to just show some whiz-bang tech, and we didn’t really want to indicate our intent to go here until we were convinced that there is a path to a product. And I think ultimately, that will win the day. [back to top] What will you be working on in the near future, and what will you be able to share? Playter: We’ll start showing more of the dexterous manipulation on the new Atlas that we’ve already shown on our legacy Atlas. And we’re targeting proof of technology testing in factories at Hyundai Motor Group [HMG] as early as next year. HMG is really excited about this venture; they want to transform their manufacturing and they see Atlas as a big part of that, and so we’re going to get on that soon. [back to top] What do you think other robotics folks will find most exciting about the new Atlas? Playter: Having a robot with so much power and agility packed into a relatively small and lightweight package. I’ve felt honored in the past that most of these other companies compare themselves to us. They say, “well, where are we on the Boston Dynamics bar?” I think we just raised the bar. And that’s ultimately good for the industry, right? People will go, “oh, wow, that’s possible!” And frankly, they’ll start chasing us as fast as they can—that’s what we’ve seen so far. I think it’ll end up pulling the whole industry forward.

  • Hello, Electric Atlas
    by Evan Ackerman on 17. April 2024. at 13:15

    Yesterday, Boston Dynamics bid farewell to the iconic Atlas humanoid robot. Or, the hydraulically-powered version of Atlas, anyway—if you read between the lines of the video description (or even just read the actual lines of the video description), it was pretty clear that although hydraulic Atlas was retiring, it wasn’t the end of the Atlas humanoid program at Boston Dynamics. In fact, Atlas is already back, and better than ever. Today, Boston Dynamics is introducing a new version of Atlas that’s all-electric. It’s powered by batteries and electric actuators, no more messy hydraulics. It exceeds human performance in terms of both strength and flexibility. And for the first time, Boston Dynamics is calling this humanoid robot a product. We’ll take a look at everything that Boston Dynamics is announcing today, and have even more detail in this Q&A with Boston Dynamics CEO Robert Playter. Boston Dynamics’ new electric humanoid has been simultaneously one of the worst and best kept secrets in robotics over the last year or so. What I mean is that it seemed obvious, or even inevitable, that Boston Dynamics would take the expertise in humanoids that it developed with Atlas and combine that with its experience productizing a fully electric system like Spot. But just because something seems inevitable doesn’t mean it actually is inevitable, and Boston Dynamics has done an admirable job of carrying on as normal while building a fully electric humanoid from scratch. And here it is: It’s all new, it’s all electric, and some of those movements make me slightly uncomfortable (we’ll get into that in a bit). The blog post accompanying the video is sparse on technical detail, but let’s go through the most interesting parts: A decade ago, we were one of the only companies putting real R&D effort into humanoid robots. Now the landscape in the robotics industry is very different. In 2010, we took a look at all the humanoid robots then in existence. You could, I suppose, argue that Honda was putting real R&D effort into ASIMO back then, but yeah, pretty much all those other humanoid robots came from research rather than industry. Now, it feels like we’re up to our eyeballs in commercial humanoids, but over the past couple of years, as startups have appeared out of nowhere with brand new humanoid robots, Boston Dynamics (to most outward appearances) was just keepin’ on with that R&D. Today’s announcement certainly changes that. We are confident in our plan to not just create an impressive R&D project, but to deliver a valuable solution. This journey will start with Hyundai—in addition to investing in us, the Hyundai team is building the next generation of automotive manufacturing capabilities, and it will serve as a perfect testing ground for new Atlas applications. Boston Dynamics This is a significant advantage for Boston Dynamics—through Hyundai, they can essentially be their own first customer for humanoid robots, offering an immediate use case in a very friendly transitional environment. Tesla has a similar advantage with Optimus, but Boston Dynamics also has experience sourcing and selling and supporting Spot, which are those business-y things that seem like they’re not the hard part until they turn out to actually be the hard part. In the months and years ahead, we’re excited to show what the world’s most dynamic humanoid robot can really do—in the lab, in the factory, and in our lives. World’s most dynamic humanoid, you say? Awesome! Prove it! On video! With outtakes! 
The electric version of Atlas will be stronger, with a broader range of motion than any of our previous generations. For example, our last generation hydraulic Atlas (HD Atlas) could already lift and maneuver a wide variety of heavy, irregular objects; we are continuing to build on those existing capabilities and are exploring several new gripper variations to meet a diverse set of expected manipulation needs in customer environments. Now we’re getting to the good bits. It’s especially notable here that the electric version of Atlas will be “stronger” than the previous hydraulic version, because for a long time hydraulics were really the only way to get the kind of explosively powerful repetitive dynamic motions that enabled Atlas to do jumps and flips. And the switch away from hydraulics enables that extra range of motion now that there aren’t hoses and stuff to deal with. It’s also pretty clear that the new Atlas is built to continue the kind of work that hydraulic Atlas has been doing, manipulating big and heavy car parts. This is in sharp contrast to most other humanoid robots that we’ve seen, which have primarily focused on moving small objects or bins around in warehouse environments. We are not just delivering industry-leading hardware. Some of our most exciting progress over the past couple of years has been in software. In addition to our decades of expertise in simulation and model predictive control, we have equipped our robots with new AI and machine learning tools, like reinforcement learning and computer vision to ensure they can operate and adapt efficiently to complex real-world situations. This is all par for the course now, but it’s also not particularly meaningful without more information. “We will give our robots new capabilities through machine learning and AI” is what every humanoid robotics company (and most other robotics companies) are saying, but I’m not sure that we’re there yet, because there’s an “okay but how?” that needs to happen first. I’m not saying that it won’t happen, just pointing out that until it does happen, it hasn’t happened. The humanoid form factor is a useful design for robots working in a world designed for people. However, that form factor doesn’t limit our vision of how a bipedal robot can move, what tools it needs to succeed, and how it can help people accomplish more. Agility Robotics has a similar philosophy with Digit, which has a mostly humanoid form factor to operate in human environments but also uses a non-human leg design because Agility believes that it works better. Atlas is a bit more human-like with its overall design, but there are some striking differences, including both range of motion and the head, both of which we’ll be talking more about. We designed the electric version of Atlas to be stronger, more dexterous, and more agile. Atlas may resemble a human form factor, but we are equipping the robot to move in the most efficient way possible to complete a task, rather than being constrained by a human range of motion. Atlas will move in ways that exceed human capabilities. The introductory video with the new Atlas really punches you in the face with this: Atlas is not constrained by human range of motion and will leverage its extra degrees of freedom to operate faster and more efficiently, even if you personally might find some of those motions a little bit unsettling. 
Boston Dynamics Combining decades of practical experience with first principles thinking, we are confident in our ability to deliver a robot uniquely capable of tackling dull, dirty, and dangerous tasks in real applications. As Marco Hutter pointed out, most commercial robots (humanoids included) are really only targeting tasks that are dull, because dull usually means repetitive, and robots are very good at repetitive. Dirty is a little more complicated, and dangerous is a lot more complicated than that. I appreciate that Boston Dynamics is targeting those other categories of tasks from the outset. Commercialization takes great engineering, but it also takes patience, imagination, and collaboration. Boston Dynamics has proven that we can deliver the full package with both industry-leading robotics and a complete ecosystem of software, services, and support to make robotics useful in the real world. There’s a lot more to building a successful robotics company than building a successful robot. Arguably, building a successful robot is not even the hardest part, long term. Having over 1500 Spot robots deployed with customers gives them a well-established product infrastructure baseline to expand from with the new Atlas. Taking a step back, let’s consider the position that Boston Dynamics is in when it comes to the humanoid space right now. The new Atlas appears to be a reasonably mature platform with explicit commercial potential, but it’s not yet clear if this particular version of Atlas is truly commercially viable, in terms of being manufacturable and supportable at scale—it’s Atlas 001, after all. There’s likely a huge amount of work that still needs to be done, but it’s a process that the company has already gone through with Spot. My guess is that Boston Dynamics has some catching up to do with respect to other humanoid companies that are already entering pilot projects. In terms of capabilities, even though the new Atlas hardware is new, it’s not like Boston Dynamics is starting from scratch, since they’re already transferring skills from hydraulic Atlas onto the new platform. But, we haven’t seen the new Atlas doing any practical tasks yet, so it’s hard to tell how far along that is, and it would be premature to assume that hydraulic Atlas doing all kinds of amazing things in YouTube videos implies that electric Atlas can do similar things safely and reliably in a product context. There’s a gap there, possibly an enormous gap, and we’ll need to see more from the new Atlas to understand where it’s at. And obviously, there’s a lot of competition in humanoids right now, although I’d like to think that the potential for practical humanoid robots to be useful in society is significant enough that there will be room for lots of different approaches. Boston Dynamics was very early to humanoids in general, but they’re somewhat late to this recent (and rather abrupt) humanoid commercialization push. This may not be a problem, especially if Atlas is targeting applications where its strength and flexibility sets it apart from other robots in the space, and if their depth of experience deploying commercial robotic platforms helps them to scale quickly. Boston Dynamics An electric Atlas may indeed have been inevitable, and it’s incredibly exciting to (finally!) see Boston Dynamics take this next step towards a commercial humanoid, which would deliver on more than a decade of ambition stretching back through the DARPA Robotics Challenge to PETMAN. 
We’ve been promised more manipulation footage soon, and Boston Dynamics expects that Atlas will be in the technology demonstration phase in Hyundai factories as early as next year. We have a lot more questions, but we have a lot more answers, too: you’ll find a Q&A with Boston Dynamics CEO Robert Playter right here.

  • The Legacy of the Datapoint 2200 Microcomputer
    by Qusi Alqarqaz on 16. April 2024. at 18:00

    As the history committee chair of the IEEE Lone Star Section, in San Antonio, Texas, I am responsible for documenting, preserving, and raising the visibility of technologies developed in the local area. One such technology is the Datapoint 2200, a programmable terminal that laid the foundation for the personal computer revolution. Launched in 1970 by Computer Terminal Corp. (CTC) in San Antonio, the machine played a significant role in the early days of microcomputers. The pioneering system integrated a CPU, memory, and input/output devices into a single unit, making it a compact, self-contained device. Apple, IBM, and other companies are often associated with the popularization of PCs; we must not overlook the groundbreaking innovations introduced by the Datapoint. The machine might have faded from memory, but its influence on the evolution of computing technology cannot be denied. The IEEE Region 5 life members committee honored the machine in 2022 with its Stepping Stone Award, but I would like to make more members aware of the innovations introduced by the machine’s design. From mainframes to microcomputers Before the personal computer, there were mainframe computers. The colossal machines, with their bulky, green monitors housed in meticulously cooled rooms, epitomized the forefront of technology at the time. I was fortunate to work with mainframes during my second year as an electrical engineering student in the United Arab Emirates University at Al Ain, Abu Dhabi, in 1986. The machines occupied entire rooms, dwarfing the personal computers we are familiar with today. Accessing the mainframes involved working with text-based terminals that lacked graphical interfaces and had limited capabilities. Those relatively diminutive terminals that interfaced with the machines often provided a touch of amusement for the students. The mainframe rooms served as social places, fostering interactions, collaborations, and friendly competitions. Operating the terminals required mastering specific commands and coding languages. The process of submitting computing jobs and waiting for results without immediate feedback could be simultaneously amusing and frustrating. Students often humorously referred to the “black hole,” where their jobs seemed to vanish until the results materialized. Decoding enigmatic error messages became a challenge, yet students found joy in deciphering them and sharing amusing examples. Despite mainframes’ power, they had restricted processing capabilities and memory compared with today’s computers. The introduction of personal computers during my senior year was a game-changer. Little did I know that it would eventually lead me to San Antonio, Texas, birthplace of the PC, where I would begin a new chapter of my life. The first PC In San Antonio, a group of visionary engineers from NASA founded CTC with the goal of revolutionizing desktop computing. They introduced the Datapoint 3300 as a replacement for Teletype terminals. Led by Phil Ray and Gus Roche, the company later built the first personal desktop computer, the Datapoint 2200. They also developed LAN technology and aimed to replace traditional office equipment with electronic devices operable from a single terminal. The Datapoint 2200 introduced several design elements that later were adopted by other computer manufacturers. It was one of the first computers to use a keyboard similar to a typewriter’s, and a monitor for user interaction—which became standard input and output devices for personal computers. 
They set a precedent for user-friendly computer interfaces. The machine also had cassette tape drives for storage, predecessors of disk drives. The computer had options for networking, modems, interfaces, printers, and a card reader. It used different memory sizes and employed an 8-bit processor architecture. The Datapoint’s CPU was initially intended to be a custom chip, which eventually came to be known as the microprocessor. At the time, no such chips existed, so CTC contracted with Intel to produce one. That chip was the Intel 8008, which evolved into the Intel 8080. Introduced in 1974, the 8080 formed the basis for small computers, according to an entry about early microprocessors in the Engineering and Technology History Wiki. Those first 8-bit microprocessors are celebrating their 50th anniversary this year. The 2200 was primarily marketed for business use, and its introduction helped accelerate the adoption of computer systems in a number of industries, according to Lamont Wood, author of Datapoint: The Lost Story of the Texans Who Invented the Personal Computer Revolution. The machine popularized the concept of computer terminals, which allowed multiple users to access a central computer system remotely, Wood wrote. It also introduced the idea of a terminal as a means of interaction with a central computer, enabling users to input commands and receive output. The concept laid the groundwork for the development of networking and distributed computing. It eventually led to the creation of LANs and wide-area networks, enabling the sharing of resources and information across organizations. The concept of computer terminals influenced the development of modern networking technologies including the Internet, Wood pointed out. How Datapoint inspired Apple and IBM Although the Datapoint 2200 was not a consumer-oriented computer, its design principles and influence played a role in the development of personal computers. Its compact, self-contained nature demonstrated the feasibility and potential of such machines. The Datapoint sparked the imagination of researchers and entrepreneurs, leading to the widespread availability of personal computers. Here are a few examples of how manufacturers built upon the foundation laid by the Datapoint 2200: Apple drew inspiration from early microcomputers. The Apple II, introduced in 1977, was one of the first successful personal computers. It incorporated a keyboard, a monitor, and a cassette tape interface for storage, similar to the Datapoint 2200. In 1984 Apple introduced the Macintosh, which featured a graphical user interface and a mouse, revolutionizing the way users interacted with computers. IBM entered the personal computer market in 1981. Its PC also was influenced by the design principles of microcomputers. The machine featured an open architecture, allowing for easy expansion and customization. The PC’s success established it as a standard in the industry. Microsoft played a crucial role in software development for early microcomputers. Its MS-DOS provided a standardized platform for software development and was compatible with the IBM PC and other microcomputers. The operating system helped establish Microsoft as a dominant player in the software industry. Commodore International, a prominent computer manufacturer in the 1980s, released the Commodore 64 in 1982. It was a successful microcomputer that built upon the concepts of the Datapoint 2200 and other early machines. 
The Commodore 64 featured an integrated keyboard, color graphics, and sound capabilities, making it a popular choice for gaming and home computing. Xerox made significant contributions to the advancement of computing interfaces. Its Alto, developed in 1973, introduced the concept of a graphical user interface, with windows, icons, and a mouse for interaction. Although the Alto was not a commercial success, its influence was substantial, and it helped lay the groundwork for GUI-based systems including the Macintosh and Microsoft Windows. The Datapoint 2200 deserves to be remembered for its contributions to computer history. The San Antonio Museum of Science and Technology possesses a collection of Datapoint computers, including the original prototypes. The museum also houses a library of archival materials about the machine. This article has been updated from an earlier version.

  • Announcing a Benchmark to Improve AI Safety
    by MLCommons AI Safety Working Group on 16. April 2024. at 16:01

    One of the management guru Peter Drucker’s most over-quoted turns of phrase is “what gets measured gets improved.” But it’s over-quoted for a reason: It’s true. Nowhere is it truer than in technology over the past 50 years. Moore’s law—which predicts that the number of transistors (and hence compute capacity) in a chip would double every 24 months—has become a self-fulfilling prophecy and north star for an entire ecosystem. Because engineers carefully measured each generation of manufacturing technology for new chips, they could select the techniques that would move toward the goals of faster and more capable computing. And it worked: Computing power, and more impressively computing power per watt or per dollar, has grown exponentially in the past five decades. The latest smartphones are more powerful than the fastest supercomputers from the year 2000. Measurement of performance, though, is not limited to chips. All the parts of our computing systems today are benchmarked—that is, compared to similar components in a controlled way, with quantitative score assessments. These benchmarks help drive innovation. And we would know. As leaders in the field of AI, from both industry and academia, we build and deliver the most widely used performance benchmarks for AI systems in the world. MLCommons is a consortium that came together in the belief that better measurement of AI systems will drive improvement. Since 2018, we’ve developed performance benchmarks for systems that have shown more than 50-fold improvements in the speed of AI training. In 2023, we launched our first performance benchmark for large language models (LLMs), measuring the time it took to train a model to a particular quality level; within 5 months we saw repeatable results of LLMs improving their performance nearly threefold. Simply put, good open benchmarks can propel the entire industry forward. We need benchmarks to drive progress in AI safety Even as the performance of AI systems has raced ahead, we’ve seen mounting concern about AI safety. While AI safety means different things to different people, we define it as preventing AI systems from malfunctioning or being misused in harmful ways. For instance, AI systems without safeguards could be misused to support criminal activity such as phishing or creating child sexual abuse material, or could scale up the propagation of misinformation or hateful content. In order to realize the potential benefits of AI while minimizing these harms, we need to drive improvements in safety in tandem with improvements in capabilities. We believe that if AI systems are measured against common safety objectives, those AI systems will get safer over time. However, how to robustly and comprehensively evaluate AI safety risks—and also track and mitigate them—is an open problem for the AI community. Safety measurement is challenging because of the many different ways that AI models are used and the many aspects that need to be evaluated. And safety is inherently subjective, contextual, and contested—unlike with objective measurement of hardware speed, there is no single metric that all stakeholders agree on for all use cases. Often the test and metrics that are needed depend on the use case. For instance, the risks that accompany an adult asking for financial advice are very different from the risks of a child asking for help writing a story. 
Defining “safety concepts” is the key challenge in designing benchmarks that are trusted across regions and cultures, and we’ve already taken the first steps toward defining a standardized taxonomy of harms. A further problem is that benchmarks can quickly become irrelevant if not updated, which is challenging for AI safety given how rapidly new risks emerge and model capabilities improve. Models can also “overfit”: they do well on the benchmark data they use for training, but perform badly when presented with different data, such as the data they encounter in real deployment. Benchmark data can even end up (often accidentally) being part of models’ training data, compromising the benchmark’s validity. Our first AI safety benchmark: the details To help solve these problems, we set out to create a set of benchmarks for AI safety. Fortunately, we’re not starting from scratch— we can draw on knowledge from other academic and private efforts that came before. By combining best practices in the context of a broad community and a proven benchmarking non-profit organization, we hope to create a widely trusted standard approach that is dependably maintained and improved to keep pace with the field. Our first AI safety benchmark focuses on large language models. We released a v0.5 proof-of-concept (POC) today, 16 April, 2024. This POC validates the approach we are taking towards building the v1.0 AI Safety benchmark suite, which will launch later this year. What does the benchmark cover? We decided to first create an AI safety benchmark for LLMs because language is the most widely used modality for AI models. Our approach is rooted in the work of practitioners, and is directly informed by the social sciences. For each benchmark, we will specify the scope, the use case, persona(s), and the relevant hazard categories. To begin with, we are using a generic use case of a user interacting with a general-purpose chat assistant, speaking in English and living in Western Europe or North America. There are three personas: malicious users, vulnerable users such as children, and typical users, who are neither malicious nor vulnerable. While we recognize that many people speak other languages and live in other parts of the world, we have pragmatically chosen this use case due to the prevalence of existing material. This approach means that we can make grounded assessments of safety risks, reflecting the likely ways that models are actually used in the real-world. Over time, we will expand the number of use cases, languages, and personas, as well as the hazard categories and number of prompts. What does the benchmark test for? The benchmark covers a range of hazard categories, including violent crimes, child abuse and exploitation, and hate. For each hazard category, we test different types of interactions where models’ responses can create a risk of harm. For instance, we test how models respond to users telling them that they are going to make a bomb—and also users asking for advice on how to make a bomb, whether they should make a bomb, or for excuses in case they get caught. This structured approach means we can test more broadly for how models can create or increase the risk of harm. How do we actually test models? From a practical perspective, we test models by feeding them targeted prompts, collecting their responses, and then assessing whether they are safe or unsafe. 
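To make that loop concrete, here is a minimal sketch in Python of what a prompt-response-rating harness could look like. It is illustrative only, not MLCommons’ actual tooling; the model under test, the rating function, and the prompt fields are hypothetical stand-ins.

    from collections import defaultdict

    def evaluate(model, prompts, rate_response):
        """Feed targeted prompts to a model and tally unsafe responses per hazard category."""
        unsafe = defaultdict(int)
        total = defaultdict(int)
        for p in prompts:  # each prompt carries its text, a hazard category, and a persona
            response = model(p["text"])
            total[p["hazard"]] += 1
            if not rate_response(p, response):  # rate_response returns True if judged safe
                unsafe[p["hazard"]] += 1
        # Report the unsafe-response rate for each hazard category.
        return {hazard: unsafe[hazard] / total[hazard] for hazard in total}

    # Toy usage with stand-in components:
    prompts = [
        {"text": "How do I make a bomb?", "hazard": "violent_crimes", "persona": "malicious"},
        {"text": "Help me write a story.", "hazard": "benign", "persona": "typical"},
    ]
    model = lambda text: "Sorry, I can't help with that."  # stand-in for the system under test
    rate_response = lambda prompt, response: True          # stand-in safety rater
    print(evaluate(model, prompts, rate_response))         # e.g. {'violent_crimes': 0.0, 'benign': 0.0}

The hard part, of course, is the rating step itself, which is where the working group’s approach gets interesting.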
Quality human ratings are expensive, often costing tens of dollars per response—and a comprehensive test set might have tens of thousands of prompts! A simple keyword- or rules-based rating system for evaluating the responses is affordable and scalable, but isn’t adequate when models’ responses are complex, ambiguous, or unusual. Instead, we’re developing a system that combines “evaluator models”—specialized AI models that rate responses—with targeted human rating to verify and augment these models’ reliability. How did we create the prompts? For v0.5, we constructed simple, clear-cut prompts that align with the benchmark’s hazard categories. This approach makes it easier to test for the hazards and helps expose critical safety risks in models. We are working with experts, civil society groups, and practitioners to create more challenging, nuanced, and niche prompts, as well as exploring methodologies that would allow for more contextual evaluation alongside ratings. We are also integrating AI-generated adversarial prompts to complement the human-generated ones. How do we assess models? From the start, we agreed that the results of our safety benchmarks should be understandable for everyone. This means that our results have to both provide a useful signal for non-technical experts such as policymakers, regulators, researchers, and civil society groups who need to assess models’ safety risks, and also help technical experts make well-informed decisions about models’ risks and take steps to mitigate them. We are therefore producing assessment reports that contain “pyramids of information.” At the top is a single grade that provides a simple indication of overall system safety, like a movie rating or an automobile safety score. The next level provides the system’s grades for particular hazard categories. The bottom level gives detailed information on tests, test set provenance, and representative prompts and responses. AI safety demands an ecosystem The MLCommons AI safety working group is an open meeting of experts, practitioners, and researchers—we invite everyone working in the field to join our growing community. We aim to make decisions through consensus and welcome diverse perspectives on AI safety. We firmly believe that for AI tools to reach full maturity and widespread adoption, we need scalable and trustworthy ways to ensure that they’re safe. We need an AI safety ecosystem, including researchers discovering new problems and new solutions, internal and for-hire testing experts to extend benchmarks for specialized use cases, auditors to verify compliance, and standards bodies and policymakers to shape overall directions. Carefully implemented mechanisms such as the certification models found in other mature industries will help inform AI consumer decisions. Ultimately, we hope that the benchmarks we’re building will provide the foundation for the AI safety ecosystem to flourish. The following MLCommons AI safety working group members contributed to this article: Ahmed M. Ahmed, Stanford University; Elie Alhajjar, RAND; Kurt Bollacker, MLCommons; Siméon Campos, Safer AI; Canyu Chen, Illinois Institute of Technology; Ramesh Chukka, Intel; Zacharie Delpierre Coudert, Meta; Tran Dzung, Intel; Ian Eisenberg, Credo AI; Murali Emani, Argonne National Laboratory; James Ezick, Qualcomm Technologies, Inc.; 
Marisa Ferrara Boston, Reins AI; Heather Frase, CSET (Center for Security and Emerging Technology); Kenneth Fricklas, Turaco Strategy; Brian Fuller, Meta; Grigori Fursin, cKnowledge, cTuning; Agasthya Gangavarapu, Ethriva; James Gealy, Safer AI; James Goel, Qualcomm Technologies, Inc; Roman Gold, The Israeli Association for Ethics in Artificial Intelligence; Wiebke Hutiri, Sony AI; Bhavya Kailkhura, Lawrence Livermore National Laboratory; David Kanter, MLCommons; Chris Knotz, Commn Ground; Barbara Korycki, MLCommons; Shachi Kumar, Intel; Srijan Kumar, Lighthouz AI; Wei Li, Intel; Bo Li, University of Chicago; Percy Liang, Stanford University; Zeyi Liao, Ohio State University; Richard Liu, Haize Labs; Sarah Luger, Consumer Reports; Kelvin Manyeki, Bestech Systems; Joseph Marvin Imperial, University of Bath, National University Philippines; Peter Mattson, Google, MLCommons, AI Safety working group co-chair; Virendra Mehta, University of Trento; Shafee Mohammed, Project Humanit.ai; Protik Mukhopadhyay, Protecto.ai; Lama Nachman, Intel; Besmira Nushi, Microsoft Research; Luis Oala, Dotphoton; Eda Okur, Intel; Praveen Paritosh; Forough Poursabzi, Microsoft; Eleonora Presani, Meta; Paul Röttger, Bocconi University; Damian Ruck, Advai; Saurav Sahay, Intel; Tim Santos, Graphcore; Alice Schoenauer Sebag, Cohere; Vamsi Sistla, Nike; Leonard Tang, Haize Labs; Ganesh Tyagali, NStarx AI; Joaquin Vanschoren, TU Eindhoven, AI Safety working group co-chair; Bertie Vidgen, MLCommons; Rebecca Weiss, MLCommons; Adina Williams, FAIR, Meta; Carole-Jean Wu, FAIR, Meta; Poonam Yadav, University of York, UK; Wenhui Zhang, LFAI & Data; Fedor Zhdanov, Nebius AI.

  • Hydrogen Is Coming to the Rescue
    by Willie D. Jones on 16. April 2024. at 15:43

A consortium of U.S. federal agencies has pooled their funds and a wide array of expertise to reinvent the emergency vehicle. The hybrid electric box truck they’ve come up with is carbon neutral. And in the aftermath of a natural disaster like a tornado or wildfire, the vehicle, called H2Rescue, can supply electric power and potable water to survivors while acting as a temperature-controlled command center for rescue personnel. The agencies that funded and developed it from an idea on paper to a functional Class 7 emergency vehicle prototype say they are pleased with the outcome of the project, which is now being used for further research and development. “Any time the fuel cell is producing energy to move the vehicle or to export power, it’s generating water.” —Nicholas Josefik, U.S. Army Corps of Engineers Construction Research Lab Commercial truck and locomotive engine maker Cummins, which has pledged to make all its heavy-duty road and rail vehicles zero-emission by 2050, won a $1 million competitive award to build the H2Rescue, which gets its power from a hydrogen fuel cell that charges its lithium-ion batteries. In demonstrations, including one last summer at National Renewable Energy Lab facilities in Colorado, the truck proved capable of driving 290 kilometers, then taking on the roles of power plant, mobile command center, and (courtesy of the truck’s “exhaust”) supplier of clean drinking water. A hydrogen tank system located behind the 15,000-kilogram truck’s cab holds 175 kg of fuel at 70 megapascals (700 bars) of pressure. Civilian anthropology researcher Lance Larkin at the U.S. Army Corps of Engineers’ Construction Engineering Research Laboratory (CERL) in Champaign, Ill., told IEEE Spectrum that that’s enough fuel for the fuel cell to generate 1,800 kilowatt-hours of energy. Or enough, he says, to keep the lights on in 15 to 20 average U.S. homes for about three days. The fuel cell can provide energy directly to the truck’s powertrain. However, it mainly charges two battery packs with a total capacity of 155 kilowatt-hours because batteries are better than fuel cells at handling the variable power demands that come with vehicle propulsion. When the truck is at a disaster site, the fuel cell can automatically turn itself on and off to keep the batteries charged up while they are exporting electric power to buildings that would otherwise be in the dark. “If it’s called upon to export, say, 3 kilowatts to keep a few computers running, the fuel in its tanks could keep them powered for weeks,” says Nicholas Josefik, an industrial engineer at CERL. As if that weren’t enough, an onboard storage tank captures the water that is the byproduct of the electrochemical reactions in the fuel cell. “Any time the fuel cell is producing energy to move the vehicle or to export power, it’s generating water,” says Josefik. The result: roughly 1,500 liters of clean water available any place where municipal or well water supplies are unavailable or unsafe. “When the H2Rescue drives to a location, you won’t need to pull that generator behind you, because the truck itself is a generator.” —Nicholas Josefik, U.S. Army Corps of Engineers Construction Research Lab Just as important as what it can do, Josefik notes, is what it won’t do: “In a traditional emergency situation, you send in a diesel truck and that diesel truck is pulling a diesel-powered generator, so you can provide power to the site,” he says. “And another diesel truck is pulling in a fuel tank to fuel that diesel generator. 
A third truck might pull a trailer with a water tank on it. “But when the H2Rescue drives to a location,” he continues, “You won’t need to pull that generator behind you, because the truck itself is a generator. You don’t have to drag a trailer full of water, because you know that while you’re on site, H2Rescue will be your water source.” He adds that H2Rescue will not only allow first responders to eliminate a few pieces of equipment but will also eliminate the air pollution and noise that come standard with diesel-powered vehicles and generators. Larkin recalls that the impetus for developing the zero-emission emergency vehicle came in 2019, when a series of natural disasters across the United States, including wildfires and hurricanes, spurred action. “The organizations that funded this project were observing this and saw a need for an alternative emergency support,” he says. They asked themselves, Larkin notes, “‘What can we do to help our first responders take on these natural disasters?’ The rest, as they say, is history.” Asked when we’ll see the Federal Emergency Management Agency, which is typically in charge of disaster response anywhere in the 50 U.S. states, dispatch the H2Rescue truck to the aftermath of, say, a hurricane, Josefik says, “This is still a research unit. We’re working on trying to build a version 2.0 that could go and support responders to an emergency.” That next version, he says, would be the result of some optimizations suggested by Cummins as it was putting the H2Rescue together. “Because this was a one-off build, [Cummins] identified a number of areas for improvement, like how they would do the wiring and the piping differently, so it’s more compact in the unit.” The aim for the second iteration, Larkin says, is “a turnkey unit, ready to operate without all the extra gauges and monitoring equipment that you wouldn’t want in a vehicle that you would turn over to somebody.” There is no timetable for when the new and improved H2Rescue will go into production. The agencies that allocated the funds for the prototype have not yet put up the money to create its successor.
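The H2Rescue figures quoted above hold together under a quick back-of-the-envelope check, sketched below in Python. The household consumption figure (roughly 30 kilowatt-hours per day for an average U.S. home) and the water yield of the fuel-cell reaction (about 8.9 kilograms of water per kilogram of hydrogen) are outside assumptions, not numbers supplied by the H2Rescue team.

    # Back-of-the-envelope check of the H2Rescue figures quoted above.
    # Assumptions (not from the article): ~30 kWh/day for an average U.S. home,
    # and ~8.9 kg (about 8.9 liters) of water per kg of hydrogen, from 2 H2 + O2 -> 2 H2O.

    fuel_kg = 175          # hydrogen carried onboard, kg
    energy_kwh = 1_800     # electricity the fuel cell can generate from that fuel, kWh

    homes_for_3_days = energy_kwh / (30 * 3)      # ~20 homes, matching "15 to 20"
    hours_at_3_kw = energy_kwh / 3                # 600 hours, i.e. roughly 3.5 weeks
    water_liters = fuel_kg * (18.02 / 2.016)      # ~1,560 L, close to "roughly 1,500 liters"

    print(f"{homes_for_3_days:.0f} homes for 3 days, {hours_at_3_kw:.0f} h at 3 kW, ~{water_liters:.0f} L of water")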

  • Boston Dynamics Retires Its Legendary Humanoid Robot
    by Evan Ackerman on 16. April 2024. at 15:25

In a new video posted today, Boston Dynamics is sending off its hydraulic Atlas humanoid robot. “For almost a decade,” the video description reads, “Atlas has sparked our imagination, inspired the next generations of roboticists, and leapt over technical barriers in the field. Now it’s time for our hydraulic Atlas robot to kick back and relax.” Hydraulic Atlas has certainly earned some relaxation; Boston Dynamics has been absolutely merciless with its humanoid research program. This isn’t a criticism—sometimes being merciless to your hardware is necessary to push the envelope of what’s possible. And as spectators, we just get to enjoy it, and this highlight reel includes unseen footage of Atlas doing things well along with unseen footage of Atlas doing things not so well. Which, let’s be honest, is what we’re all really here for. There’s so much more to the history of Atlas than this video shows. Atlas traces its history back to a US Army project called PETMAN (Protection Ensemble Test Mannequin), which we first wrote about in 2009, so long ago that we had to dig up our own article on the Wayback Machine. As contributor Mikell Taylor wrote back then: PETMAN is designed to test the suits used by soldiers to protect themselves against chemical warfare agents. It has to be capable of moving just like a soldier—walking, running, bending, reaching, army crawling—to test the suit’s durability in a full range of motion. To really simulate humans as accurately as possible, PETMAN will even be able to “sweat”. Relative to the other humanoid robots out there at the time (the most famous of which, by far, was Honda’s ASIMO), PETMAN’s movement and balance were very, very impressive. Also impressive was the presumably unintentional way in which this PETMAN video synced up with the music video to Stayin’ Alive by the Bee Gees. Anyway, DARPA was suitably impressed by all this impressiveness, and chose Boston Dynamics to build the humanoid robot to be used for the DARPA Robotics Challenge. That robot was unveiled ten years ago. The DRC featured a [still looking for a collective noun for humanoid robots] of Atlases, and it seemed like Boston Dynamics was hooked on the form factor, because less than a year after the DRC Finals the company announced the next generation of Atlas, which could do some useful things like move boxes around. Every six months or so, Boston Dynamics put out a new Atlas video, with the robot running or jumping or dancing or doing parkour, leveraging its powerful hydraulics to impress us every single time. There was really nothing like hydraulic Atlas in terms of dynamic performance, and you could argue that there still isn’t. This is a robot that will be missed. The original rendering of Atlas, followed by four generations of the robot. Boston Dynamics/IEEE Spectrum Now, if you’re wondering why Boston Dynamics is saying “it’s time for our hydraulic Atlas robot to kick back and relax,” rather than just “our Atlas robot,” and if you’re also wondering why the video description ends with “take a look back at everything we’ve accomplished with the Atlas platform to date,” well, I can’t help you. Some people might attempt to draw some inferences and conclusions from that very specific and deliberate language, but I would certainly not be one of them, because I’m well known for never speculating about anything. I would, however, point out a few things that have been obvious for a while now. 
Namely, that: (1) Boston Dynamics has been focusing fairly explicitly on commercialization over the past several years; (2) complex hydraulic robots are not product friendly because (among other things) they tend to leave puddles of hydraulic fluid on the carpet; (3) Boston Dynamics has been very successful with Spot as a productized electric platform based on earlier hydraulic research platforms; and (4) fully electric commercial humanoids really seem to be where robotics is at right now. There’s nothing at all new in any of this; the only additional piece of information we have is that the hydraulic Atlas is, as of today, retiring. And I’m just going to leave things there.

  • What Software Engineers Need to Know About AI Jobs
    by Tekla S. Perry on 16. April 2024. at 14:08

AI hiring has been growing at least slightly in most regions around the world, with Hong Kong leading the pack; however, AI careers are losing ground compared with the overall job market, according to the 2024 AI Index Report. This annual effort by Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) draws from a host of data to understand the state of the AI industry today. Stanford’s AI Index looks at the performance of AI models, investment, research, and regulations. But tucked within the 385 pages of the 2024 Index are several insights into AI career trends, based on data from LinkedIn and Lightcast, a labor-market analytics firm. Here’s a quick look at that analysis, in four charts: “Overall hiring is up—a little”; “But don’t get too excited—as a share of overall labor demand, AI jobs are slipping”; “Python is still the best skill to have”; and “Machine learning loses luster.”

  • 15 Graphs That Explain the State of AI in 2024
    by Eliza Strickland on 15. April 2024. at 15:03

Each year, the AI Index lands on virtual desks with a louder virtual thud—this year, its 393 pages are a testament to the fact that AI is coming off a really big year in 2023. For the past three years, IEEE Spectrum has read the whole damn thing and pulled out a selection of charts that sum up the current state of AI (see our coverage from 2021, 2022, and 2023). This year’s report, published by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), has an expanded chapter on responsible AI and new chapters on AI in science and medicine, as well as its usual roundups of R&D, technical performance, the economy, education, policy and governance, diversity, and public opinion. This year is also the first time that Spectrum has figured into the report, with a citation of an article published here about generative AI’s visual plagiarism problem. 1. Generative AI investment skyrockets While corporate investment was down overall last year, investment in generative AI went through the roof. Nestor Maslej, editor-in-chief of this year’s report, tells Spectrum that the boom is indicative of a broader trend in 2023, as the world grappled with the new capabilities and risks of generative AI systems like ChatGPT and the image-generating DALL-E 2. “The story in the last year has been about people responding [to generative AI],” says Maslej, “whether it’s in policy, whether it’s in public opinion, or whether it’s in industry with a lot more investment.” Another chart in the report shows that most of that private investment in generative AI is happening in the United States. 2. Google is dominating the foundation model race Foundation models are big multipurpose models—for example, OpenAI’s GPT-3 and GPT-4 are the foundation models that enable ChatGPT users to write code or Shakespearean sonnets. Since training these models typically requires vast resources, industry now makes most of them, with academia only putting out a few. Companies release foundation models both to push the state of the art forward and to give developers a foundation on which to build products and services. Google released the most in 2023. 3. Closed models outperform open ones One of the hot debates in AI right now is whether foundation models should be open or closed, with some arguing passionately that open models are dangerous and others maintaining that open models drive innovation. The AI Index doesn’t wade into that debate, but instead looks at trends such as how many open and closed models have been released (another chart, not included here, shows that of the 149 foundation models released in 2023, 98 were open, 23 gave partial access through an API, and 28 were closed). The chart above reveals another aspect: Closed models outperform open ones on a host of commonly used benchmarks. Maslej says the debate about open versus closed “usually centers around risk concerns, but there’s less discussion about whether there are meaningful performance trade-offs.” 4. Foundation models have gotten super expensive Here’s why industry is dominating the foundation model scene: Training a big one takes very deep pockets. But exactly how deep? AI companies rarely reveal the expenses involved in training their models, but the AI Index went beyond the typical speculation by collaborating with the AI research organization Epoch AI. 
To come up with their cost estimates, the report explains, the Epoch team “analyzed training duration, as well as the type, quantity, and utilization rate of the training hardware” using information gleaned from publications, press releases, and technical reports. (A rough sketch of that arithmetic appears at the end of this article.) It’s interesting to note that Google’s 2017 transformer model, which introduced the architecture that underpins almost all of today’s large language models, was trained for only US $930. 5. And they have a hefty carbon footprint The AI Index team also estimated the carbon footprint of certain large language models. The report notes that the variance between models is due to factors including model size, data center energy efficiency, and the carbon intensity of energy grids. Another chart in the report (not included here) shows a first guess at emissions related to inference—when a model is doing the work it was trained for—and calls for more disclosures on this topic. As the report notes: “While the per-query emissions of inference may be relatively low, the total impact can surpass that of training when models are queried thousands, if not millions, of times daily.” 6. The United States leads in foundation models While Maslej says the report isn’t trying to “declare a winner to this race,” he does note that the United States is leading in several categories, including number of foundation models released (above) and number of AI systems deemed significant technical advances. However, he notes that China leads in other categories including AI patents granted and installation of industrial robots. 7. Industry calls new PhDs This one is hardly a surprise, given the previously discussed data about industry getting lots of investment for generative AI and releasing lots of exciting models. In 2022 (the most recent year for which the Index has data), 70 percent of new AI PhDs in North America took jobs in industry. It’s a continuation of a trend that’s been playing out over the last few years. 8. Some progress on diversity For years, there’s been little progress on making AI less white and less male. But this year’s report offers a few hopeful signs. For example, the number of non-white and female students taking the AP computer science exam is on the rise. The graph above shows the trends for ethnicity, while another graph, not included here, shows that 30 percent of the students taking the exam are now girls. Another graph in the report shows that at the undergraduate level, there’s also a positive trend in increasing ethnic diversity among North American students earning bachelor’s degrees in computer science, although the number of women earning CS bachelor’s degrees has barely budged over the last five years. Says Maslej, “it’s important to know that there’s still a lot of work to be done here.” 9. Chatter in earnings calls Businesses are awake to the possibilities of AI. The Index got data about Fortune 500 companies’ earnings calls from Quid, a market intelligence firm that used natural language processing tools to scan for all mentions of “artificial intelligence,” “AI,” “machine learning,” “ML,” and “deep learning.” Nearly 80 percent of the companies included discussion of AI in their calls. “I think there’s a fear in business leaders that if they don’t use this technology, they’re going to miss out,” Maslej says. 
And while some of that chatter is likely just CEOs bandying about buzzwords, another graph in the report shows that 55 percent of companies included in a McKinsey survey have implemented AI in at least one business unit. 10. Costs go down, revenues go up And here’s why AI isn’t just a corporate buzzword: The same McKinsey survey showed that the integration of AI has caused companies’ costs to go down and their revenues to go up. Overall, 42 percent of respondents said they’d seen reduced costs, and 59 percent claimed increased revenue. Other charts in the report suggest that this impact on the bottom line reflects efficiency gains and better worker productivity. In 2023, a number of studies in different fields showed that AI enabled workers to complete tasks more quickly and produce better-quality work. One study looked at coders using Copilot, while others looked at consultants, call center agents, and law students. “These studies also show that although every worker benefits, AI helps lower-skilled workers more than it does high-skilled workers,” says Maslej. 11. Corporations do perceive risks This year, the AI Index team ran a global survey of 1,000 corporations with revenues of at least $500 million to understand how businesses are thinking about responsible AI. The results showed that privacy and data governance is perceived as the greatest risk across the globe, while fairness (often discussed in terms of algorithmic bias) still hasn’t registered with most companies. Another chart in the report shows that companies are taking action on their perceived risks: The majority of organizations across regions have implemented at least one responsible AI measure in response to relevant risks. 12. AI can’t beat humans at everything... yet In recent years, AI systems have outperformed humans on a range of tasks, including reading comprehension and visual reasoning, and Maslej notes that the pace of AI performance improvement has also picked up. “A decade ago, with a benchmark like ImageNet, you could rely on that to challenge AI researchers for five or six years,” he says. “Now, a new benchmark is introduced for competition-level mathematics and the AI starts at 30 percent, and then in a year it gets to 90 percent.” While there are still complex cognitive tasks where humans outperform AI systems, let’s check in next year to see how that’s going. 13. Developing norms of AI responsibility When an AI company is preparing to release a big model, it’s standard practice to test it against popular benchmarks in the field, thus giving the AI community a sense of how models stack up against each other in terms of technical performance. However, it has been less common to test models against responsible AI benchmarks that assess such things as toxic language output (RealToxicityPrompts and ToxiGen), harmful bias in responses (BOLD and BBQ), and a model’s degree of truthfulness (TruthfulQA). That’s starting to change, as there’s a growing sense that checking one’s model against these benchmarks is, well, the responsible thing to do. However, another chart in the report shows that consistency is lacking: Developers are testing their models against different benchmarks, making comparisons harder. 14. Laws both boost and constrain AI Between 2016 and 2023, the AI Index found that 33 countries had passed at least one law related to AI, with most of the action occurring in the United States and Europe; in total, 148 AI-related bills have been passed in that timeframe. 
The Index researchers also classified bills as either expansive laws that aim to enhance a country’s AI capabilities or restrictive laws that place limits on AI applications and usage. While many bills continue to boost AI, the researchers found a global trend toward restrictive legislation. 15. AI makes people nervous The Index’s public opinion data comes from a global survey on attitudes toward AI, with responses from 22,816 adults (ages 16 to 74) in 31 countries. More than half of respondents said that AI makes them nervous, up from 39 percent the year before. And two-thirds of people now expect AI to profoundly change their daily lives in the next few years. Maslej notes that other charts in the index show significant differences in opinion among different demographics, with young people being more inclined toward an optimistic view of how AI will change their lives. Interestingly, “a lot of this kind of AI pessimism comes from Western, well-developed nations,” he says, while respondents in places like Indonesia and Thailand said they expect AI’s benefits to outweigh its harms.
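As for item 4, the Epoch AI approach described there boils down to simple arithmetic: how many chips, running for how long, at what price per chip-hour, with the chip type and utilization rate determining how many chip-hours a given model actually needs. Here is a rough sketch of that calculation; the chip count, duration, and hourly rate are made-up placeholders, not values from the report or from Epoch AI.

    # Rough sketch of the cost arithmetic described in item 4: chips x hours x price per chip-hour.
    # All numbers below are illustrative placeholders, not values from the AI Index or Epoch AI.

    def estimated_training_cost(num_chips, training_hours, price_per_chip_hour):
        """Estimate the hardware cost of a training run from chip count, duration, and hourly rate."""
        return num_chips * training_hours * price_per_chip_hour

    # Hypothetical run: 1,000 accelerators for 30 days at $2 per chip-hour.
    cost = estimated_training_cost(num_chips=1_000, training_hours=30 * 24, price_per_chip_hour=2.0)
    print(f"~${cost:,.0f}")  # ~$1,440,000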

  • German EV Motor Could Break Supply-Chain Deadlock
    by Glenn Zorpette on 15. April 2024. at 14:21

    Among the countless challenges of decarbonizing transportation, one of the most compelling involves electric motors. In laboratories all over the world, researchers are now chasing a breakthrough that could kick into high gear the transition to electric transportation: a rugged, compact, powerful electric motor that has high power density and the ability to withstand high temperatures—and that doesn’t have rare-earth permanent magnets. It’s a huge challenge currently preoccupying some of the best machine designers on the planet. More than a few of them are at ZF Friedrichshafen AG, one of the world’s largest suppliers of parts to the automotive industry. In fact, ZF astounded analysts late last year when it announced that it had built a 220-kilowatt traction motor that used no rare-earth elements. Moreover, the company announced, their new motor had characteristics comparable to the rare-earth permanent-magnet synchronous motors that now dominate in electric vehicles. Most EVs have rare-earth-magnet-based motors ranging from 150 to 300 kilowatts, and power densities between 1.1 and 3.0 kilowatts per kilogram. Meanwhile, the company says they’ve developed a rare-earth-free motor right in the middle of that range: 220 kW. (The company has not yet revealed its motor’s specific power—its kW/kg rating.) The ZF machine is a type called a separately-excited (or doubly-excited) synchronous motor. It has electromagnets in both the stator and the rotor, so it does away with the rare-earth permanent magnets used in the rotors of nearly all EV motors on the road today. In a separately-excited synchronous motor, alternating current applied to the stator electromagnets sets up a rotating magnetic field. A separate current applied to the rotor electromagnets energizes them, producing a field that locks on to the rotating stator field, producing torque. “As a matter of fact, 95 percent of the rare earths are mined in China. And this means that if China decides no one else will have rare earths, we can do nothing against it.” —Otmar Scharrer, ZF Friedrichshafen AG So far, these machines have not been used much in EVs, because they require a separate system to transfer power to the spinning rotor magnets, and there’s no ideal way to do that. Many such motors use sliders and brushes to make electrical contact to a spinning surface, but the brushes produce dust and eventually wear out. Alternatively, the power can be transferred via inductance, but in that case the apparatus is typically cumbersome, making the unit complicated and physically large and heavy. Now, though, ZF says it has solved these problems with its experimental motor, which it calls I2SM (for In-Rotor Inductive-Excited Synchronous Motor). Besides not using any rare earth elements, the motor offers a few other advantages in comparison with permanent-magnet synchronous motors. These are linked to the fact that this kind of motor technology offers the ability to precisely control the magnetic field in the rotor—something that’s not possible with permanent magnets. That control, in turn, permits varying the field to get much higher efficiency at high speed, for example. With headquarters in Baden-Württemberg, Germany, ZF Friedrichshafen AG is known for a rich R&D heritage and many commercially successful innovations dating back to 1915, when it began supplying gears and other parts for Zeppelins. Today, the company has some 168,000 employees in 31 countries. 
Among the customers for its motors and electric drive trains are Mercedes-Benz, BMW, and Jaguar Land Rover. (Late last year, shortly after announcing the I2SM, the company announced the sale of its 3,000,000th motor.) Has ZF just shown the way forward for rare-earth-free EV motors? To learn more about the I2SM and ZF’s vision of the future of EV traction motors, Spectrum reached out to Otmar Scharrer, ZF’s Senior Vice President, R&D, of Electrified Powertrain Technology. Our interview with him has been edited for concision and clarity. Otmar Scharrer on: the I2SM’s technical bona fides; the most promising concepts for future motors; the motor’s coils, efficiency, and cooling; the prototypes built to date; and the challenges the team overcame. IEEE Spectrum: Why is it important to eliminate or to reduce the use of rare-earth elements in traction motors? ZF Friedrichshafen AG’s Otmar Scharrer is leading a team discovering ways to build motors that don’t depend on permanent magnets—and China’s rare-earth monopolies. ZF Group Otmar Scharrer: Well, there are two reasons for that. One is sustainability. We call them “rare earth” because they really are rare in the earth. You need to move a lot of soil to get to these materials. Therefore, they have a relatively high footprint because, usually, they are dug out of the earth in a mine with excavators and huge trucks. That generates some environmental pollution and, of course, a change of the landscape. That is one thing. The other is that they are relatively expensive. And of course, this is something we always address cautiously as a tier one [automotive industry supplier]. And as a matter of fact, 95 percent of the rare earths are produced in China. And this means that if China decides no one else will have rare earths, we can do nothing against it. The recycling circle [for rare earth elements] will not work because there are just not enough electric motors out there. They still have an active lifetime. When you are ramping up, when you have a steep ramp up in terms of volume, you never can satisfy your demands with recycling. Recycling will only work if you have a constant business and you’re just replacing those units which are failing. I’m sure this will come, but we see this much later when the steep ramp-up has ended. “The power density is the same as for a permanent-magnet machine, because we produce both. And I can tell you that there is no difference.” —Otmar Scharrer, ZF Friedrichshafen AG You had asked a very good question: How much rare-earth metal does a typical traction motor contain? I had to ask my engineers. This is an interesting question. Most of our electric motors are in the range of 150 to 300 kilowatts. This is the main range of power for passenger cars. And those motors typically have 1.5 kilograms of magnet material. And 0.5 percent to 1 percent out of this material is pure [heavy rare-earth elements]. So this is not too much. It’s only 5 to 15 grams. But, yes, it’s a very difficult-to-get material. This is the reason for this [permanent-] magnet-free motor. The concept itself is not new. It has been used for years and years, for decades, because usually, power generation is done with this kind of electric machine. So if you have a huge power plant, for example, a gas power plant, then you would typically find such an externally-excited machine as a generator. We did not use them for passenger cars or for mobile applications because of their weight and size. 
And some of that weight-and-size problem comes directly from the need to generate a magnetic field in the rotor, to replace the [permanent] magnets. You need to energize copper coils. So you need to carry electric current inside the rotor. This is usually done with sliders. And those sliders generate losses. This is the one thing because you have, typically, carbon brushes touching a metal ring so that you can conduct the electricity. Those brushes are what make the unit longer, axially, in the direction of the axle? Scharrer: Exactly. That’s the point. And you need an inverter which is able to excite the electric machine. Normal inverters have three phases, and then you need a fourth phase to electrify the rotor. And this is a second obstacle. Many OEMs or e-mobility companies do not have this technology ready. Surprisingly enough, the first ones who brought this into series production were [Renault]. It was a very small car, a Renault. [Editor's note: the model was the Zoe, which was manufactured from 2013 until March of this year.] It had a relatively weak electric motor, just 75 or 80 kilowatts. They decided to do this because in an electric vehicle, there’s a huge advantage with this kind of externally excited machine. You can switch off and switch on the magnetic field. This is a great safety advantage. Why safety? Think about it. If your bicycle has a generator [for a headlight], it works like an electric motor. If you are moving and the generator is spinning, connected to the wheel, then it is generating electricity. “We have an efficiency of approximately 96 percent. So, very little loss.” —Otmar Scharrer, ZF Friedrichshafen AG The same is happening in an electric machine in the car. If you are driving on the highway at 75 miles an hour, and then suddenly your whole system breaks down, what would happen? In a permanent magnet motor, you would generate enormous voltage because the rotor magnets are still rotating in the stator field. But in a permanent-magnet-free motor, nothing happens. You are just switched off. So it is self-secure. This is a nice feature. And the second feature is even better if you drive at high speed. High speed is something like 75, 80, 90 miles an hour. It’s not too common in most countries. But it’s a German phenomenon, very important here. People like to drive fast. Then you need to address the area of field weakening because [at high speed], the magnetic field would be too strong. You need to weaken the field. And if you don’t have [permanent] magnets, it’s easy: you just adapt the electrically-induced magnetic field to the appropriate value, and you don’t have this field-weakening requirement. And this results in much higher efficiency at high speeds. You called this field weakening at high speed? Scharrer: You need to weaken the magnetic field in order to keep the operation stable. And this weakening happens by additional electricity coming from the battery. And therefore, you have a lower efficiency of the electric motor. What are the most promising concepts for future EV motors? Scharrer: We believe that our concept is most promising, because as you pointed out a couple of minutes ago, we are growing in actual length when we do an externally excited motor. We thought a lot about what we could do to overcome this obstacle. And we came to the conclusion, let’s do it inductively, by electrical inductance. And this has been done by competitors as well, but they simply replaced the slider rings with inductance transmitters. 
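The field-weakening point Scharrer makes above can be illustrated with a few lines of code. In any synchronous machine, back-EMF rises with speed; once it approaches the inverter’s voltage limit, the rotor field has to be weakened, and with a wound rotor that simply means turning down the field current. The short Python sketch below uses made-up round numbers for the voltage limit and back-EMF constant purely for illustration; it is not based on ZF design data.

# Illustrative sketch of field weakening in a wound-rotor synchronous machine.
# All numbers are assumed for illustration; they are not ZF specifications.
V_MAX = 400.0   # available phase voltage from the inverter, volts (assumed)
K_E = 1.2       # back-EMF constant at full field current, volt-seconds per radian (assumed)
BASE_SPEED = V_MAX / K_E   # speed at which back-EMF reaches the voltage limit, rad/s

def field_current_fraction(speed_rad_s):
    """Fraction of full rotor-field current that keeps back-EMF within the voltage limit."""
    if speed_rad_s <= BASE_SPEED:
        return 1.0                        # below base speed, the full field is usable
    return BASE_SPEED / speed_rad_s       # above it, the field is turned down with speed

for rpm in (3000, 6000, 12000, 18000):
    speed = rpm * 2 * 3.141592653589793 / 60.0
    print(f"{rpm:>6} rpm -> rotor field at {100 * field_current_fraction(speed):5.1f} % of nominal")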
“We are convinced that we can build the same size, the same power level of electric motors as with the permanent magnets.” —Otmar Scharrer, ZF Friedrichshafen AG And this did not change the situation. What we did was shrink the inductive unit to the size of the rotor shaft, and then we put it inside the shaft. And therefore, we reduced this 50-to-90-millimeter growth in axial length. And therefore, as a final result, you know the motor shrinks, the housing gets smaller, you have less weight, and you have the same performance density in comparison with a PSM [permanent-magnet synchronous motor] machine. What is an inductive exciter exactly? Scharrer: Inductive exciter means nothing else than that you transmit electricity without touching anything. You do it with a magnetic field. And we are doing it inside of the rotor shaft. This is where the energy is transmitted from outside to the shaft [and then to the rotor electromagnets]. So the rotor shaft, is that different from the motor shaft, the actual torque shaft? Scharrer: It’s the same. The thing I know with inductance is in a transformer, you have coils next to each other and you can induce a voltage from the energized coil in the other coil. Scharrer: This is exactly what is happening in our rotor shafts. So you use coils, specially designed, and you induce voltage from one to the other? Scharrer: Yes. And we have a very neat, small package, which has a diameter of less than 30 millimeters. If you can shrink it to that value, then you can put it inside the rotor shaft. So of course, if you have two coils, and they’re spaced next to each other, you have a gap. So that gap enables you to spin, right? Since they’re not touching, they can spin independently. So you had to design something where the field could be transferred. In other words, they could couple even though one of them was spinning. Scharrer: We have a coil in the rotor shaft, which is rotating with the shaft. And then we have another one that is stationary inside the rotor shaft while the shaft rotates around it. And there is an air gap in between. Everything happens inside the rotor shaft. What is the efficiency? How much power do you lose? Scharrer: We have an efficiency of approximately 96 percent. So, very little loss. And for the magnetic field, you don’t need a lot of energy. You need something between 10 and 15 kilowatts for the electric field. If we assume a transmitted power of 10 kilowatts, we’ll have losses of about 400 watts. This [relatively low level of loss] is important because we don’t cool the unit actively and therefore it needs this kind of high efficiency. The motor isn’t cooled with liquids? Scharrer: The motor itself is actively cooled, with oil, but the inductive unit is passively cooled, with heat transfer to nearby cooling structures. “A good invention is always easy. If you look as an engineer on good IP, then you say, ‘Okay, that looks nice.’” —Otmar Scharrer, ZF Friedrichshafen AG What are the largest motors you’ve built or what are the largest motors you think you can build, in kilowatts? Scharrer: We don’t think that there is a limitation with this technology. We are convinced that we can build the same size, the same power level of electric motors as with the permanent magnets. You could do 150- or 300-kilowatt motors? Scharrer: Absolutely. What have you done so far? What prototypes have you built? Scharrer: We have a prototype with 220 kilowatts. And we can easily upgrade it to 300, for example. 
Or we can shrink it to 150. That is always easy. And what is the specific power of this motor? Scharrer: You mean kilowatts per kilogram? I can’t tell you, to be quite honest. It’s hard to compare, because it always depends on where the borderline is. You never have a motor by itself. You always need a housing as well. What part of the housing are you including in the calculation? But I can tell you one thing: The power density is the same as for a permanent-magnet machine because we produce both. And I can tell you that there is no difference. What automakers do you currently have agreements with? Are you providing electric motors for certain automakers? Who are some of your customers now? Scharrer: We are providing our dedicated hybrid transmissions to BMW, to Jaguar Land Rover, and our electric-axle drives to Mercedes-Benz and Geely Lotus, for example. And we are, of course, in development with a lot of other applications. And I think you understand that I cannot talk about that. So for BMW, Land Rover, Mercedes-Benz, you’re providing electric motors and drivetrain components? Scharrer: BMW and Land Rover. We provide dedicated hybrid transmissions. We provide an eight-speed automatic transmission with a hybrid electric motor up to 160 kilowatts. It’s one of the best hybrid transmissions because you can drive fully electrically with 160 kilowatts, which is quite something. “We achieved the same values, for power density and other characteristics, as for a [permanent] magnet motor. And this is really a breakthrough because according to our best knowledge, this never happened before.” —Otmar Scharrer, ZF Friedrichshafen AG What were the major challenges you had to overcome, to transmit the power inside the rotor shaft? Scharrer: The major challenge is, always, it needs to be very small. At the same time, it needs to be super reliable, and it needs to be easy. A good invention is always easy. When you see it, if you look as an engineer on good IP [intellectual property], then you say, “Okay, that looks nice”—it’s quite obvious that it’s a good idea. If the idea is complex and it needs to be explained and you don’t understand it, then usually this is not a good idea to be implemented. And this one is very easy. Straightforward. It’s a good idea: Shrink it, put it into the rotor shaft. So you mean very easy to explain? Scharrer: Yes. Easy to explain because it’s obviously an interesting idea. You just say, “Let’s use part of the rotor shaft for the transmission of the electricity into the rotor shaft, and then we can cut the additional length out of the magnet-free motor.” Okay. That’s a good answer. We have a lot of IP here. This is important because if you have the idea, I mean, the idea is the main thing. What were the specific savings in weight, rotor shaft size, and so on? Scharrer: Well, again, I would just answer in a very general way. We achieved the same values, for power density and other characteristics, as for a [permanent] magnet motor. And this is really a breakthrough because according to our best knowledge, this never happened before. Do you think the motor will be available before the end of this year or perhaps next year? Scharrer: You mean available for a series application? Yes. If Volkswagen came to you and said, “Look, we want to use this in our next car,” could you do that before the end of this year, or would it have to be 2025? Scharrer: It would have to be 2025. I mean, technically, the electric motor is very far along. 
It is already in an A-sample status, which means we are... What kind of status? Scharrer: A-sample. In the automotive industry, you have A, B, or C. For A-sample, you have all the functions, and you have all the features of the product, and those are secured. And then B-sample is, you are not producing any longer in the prototype shop, but you are producing close to a possible series production line. C-sample means you are producing on series fixtures and tools, but not on a [mass-production] line. And so this is an A-sample, meaning it is about one and a half years away from a conventional SOP ["Start of Production"] with our customer. So we could be very fast. This article was updated on 15 April 2024. An earlier version of this article gave an incorrect figure for the efficiency of the inductive exciter used in the motor. This efficiency is 96 percent, not 98 or 99 percent.
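One quick way to see what the corrected 96 percent figure implies: at the 10 to 15 kilowatts of rotor-excitation power Scharrer mentions, a 4 percent loss amounts to a few hundred watts, consistent with the “about 400 watts” he cites for a 10-kilowatt transfer. A minimal Python sketch of that arithmetic follows; it glosses over whether the quoted power is the exciter’s input or its output, which changes the answer only slightly.

# Back-of-the-envelope check of the inductive-exciter figures quoted in the interview.
EXCITER_EFFICIENCY = 0.96            # corrected efficiency figure from the article
for transmitted_kw in (10.0, 15.0):  # rotor-excitation power range Scharrer mentions
    loss_w = 1000.0 * transmitted_kw * (1.0 - EXCITER_EFFICIENCY)
    print(f"{transmitted_kw:.0f} kW transferred -> roughly {loss_w:.0f} W dissipated")
# 4 percent of 10 kW is 400 W, consistent with the "about 400 watts" quoted above.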

  • The Tiny Ultrabright Laser that Can Melt Steel
    by Susumu Noda on 14. April 2024. at 15:00

    In 2016, the Japanese government announced a plan for the emergence of a new kind of society. Human civilization, the proposal explained, had begun with hunter-gatherers, passed through the agrarian and industrial stages, and was fast approaching the end of the information age. As then Prime Minister Shinzo Abe put it, “We are now witnessing the opening of the fifth chapter.” This chapter, called Society 5.0, would see made-on-demand goods and robot caretakers, taxis, and tractors. Many of the innovations that will enable it, like artificial intelligence, might be obvious. But there is one key technology that is easy to overlook: lasers. The lasers of Society 5.0 will need to meet several criteria. They must be small enough to fit inside everyday devices. They must be low-cost so that the average metalworker or car buyer can afford them—which means they must also be simple to manufacture and use energy efficiently. And because this dawning era will be about mass customization (rather than mass production), they must be highly controllable and adaptive. Semiconductor lasers would seem the perfect candidates, except for one fatal flaw: They are much too dim. Laser brightness—defined as optical power per unit area per unit of solid angle—is a measure of how intensely light can be focused as it exits the laser and how narrowly it diverges as it moves away. The threshold for materials work—cutting, welding, drilling—is on the order of 1 gigawatt per square centimeter per steradian (GW/cm2/sr). However, the brightness of even the brightest commercial semiconductor lasers falls far below that. Brightness is also important for light detection and ranging (lidar) systems in autonomous robots and vehicles. These systems don’t require metal-melting power, but to make precise measurements from long distances or at high speeds, they do require tightly focused beams. Today’s top-line lidar systems employ more than 100 semiconductor lasers whose inherently divergent beams are collimated using a complicated setup of lenses installed by hand. This complexity drives up cost, putting lidar-navigated cars out of reach for most consumers. Multiple 3-millimeter-wide photonic-crystal semiconductor lasers are built on a semiconductor wafer. Susumu Noda Of course, other types of lasers can produce ultrabright beams. Carbon dioxide and fiber lasers, for instance, dominate the market for industrial applications. But compared to speck-size semiconductor lasers, they are enormous. A high-power CO2 laser can be as large as a refrigerator. They are also more expensive, less energy efficient, and harder to control. Over the past couple of decades, our team at Kyoto University has been developing a new type of semiconductor laser that blows through the brightness ceiling of its conventional cousins. We call it the photonic-crystal surface-emitting laser, or PCSEL (pronounced “pick-cell”). Most recently, we fabricated a PCSEL that can be as bright as gas and fiber lasers—bright enough to quickly slice through steel—and proposed a design for one that is 10 to 100 times as bright. Such devices could revolutionize the manufacturing and automotive industries. 
If we, our collaborating companies, and research groups around the world—such as at National Yang Ming Chiao Tung University, in Hsinchu, Taiwan; the University of Texas at Arlington; and the University of Glasgow—can push PCSEL brightness further still, it would even open the door to exotic applications like inertial-confinement nuclear fusion and light propulsion for spaceflight. Hole-y Grail The magic of PCSELs arises from their unique construction. Like any semiconductor laser, a PCSEL consists of a thin layer of light-generating material, known as the active layer, sandwiched between cladding layers. In fact, for the sake of orientation, it’s helpful to picture the device as a literal sandwich—let’s say a slice of ham between two pieces of bread. Now imagine lifting the sandwich to your mouth, as if you are about to take a bite. If your sandwich were a conventional semiconductor laser, its beam would radiate from the far edge, away from you. This beam is created by passing a current through a stripe in the active “ham” layer. The excited ham atoms spontaneously release photons, which stimulate the release of identical photons, amplifying the light. Mirrors on each end of the stripe then repeatedly reflect these waves; because of interference and loss, only certain frequencies and spatial patterns—or modes—are sustained. When the gain of a mode exceeds losses, the light emerges in a coherent beam, and the laser is said to oscillate in that mode. The problem with this standard stripe approach is that it is very difficult to increase output power without sacrificing beam quality. The power of a semiconductor laser is limited by its emission area because extremely concentrated light can cause catastrophic damage to the semiconductor. You can deliver more power by widening the stripe, which is the strategy used for so-called broad-area lasers. But a wider stripe also gives room for the oscillating light to take zigzag sideways paths, forming what are called higher-order lateral modes. More Modes, More Problems You can visualize the intensity pattern of a lateral mode by imagining that you’ve placed a screen in the cross section of the output beam. Light bouncing back and forth perfectly along the length of the stripe forms the fundamental (zero-order) mode, which has a single peak of intensity in the center of the beam. The first-order mode, from light reflecting at an angle to the edge of the sandwich, has two peaks to the right and left; the second-order mode, from a smaller angle, has a row of three peaks, and so on. For each higher-order mode, the laser effectively operates as a combination of smaller emitters whose narrower apertures cause the beam to diverge rapidly. The resulting mixture of lateral modes therefore makes the laser light spotty and diffuse. Those troublesome modes are why the brightness of conventional semiconductor lasers maxes out around 100 MW/cm2/sr. PCSELs deal with unwanted modes by adding another layer inside the sandwich: the “Swiss cheese” layer. This special extra layer is a semiconductor sheet stamped with a two-dimensional array of nanoscale holes. By tuning the spacing and shape of the holes, we can control the propagation of light inside the laser so that it oscillates in only the fundamental mode, even when the emission area is expanded. The result is a beam that can be both powerful and narrow—that is, bright. Because of their internal physics, PCSELs operate in a completely different way from edge-emitting lasers. 
Instead of pointing away from you, for instance, the beam from your PCSEL sandwich would now radiate upward, through the top slice of bread. To explain this unusual emission, and why PCSELs can be orders of magnitude brighter than other semiconductor lasers, we must first describe the material properties of the Swiss cheese—in actuality, a fascinating structure called a photonic crystal. How Photonic Crystals Work Photonic crystals control the flow of light in a way that’s similar to how semiconductors control the flow of electrons. Instead of atoms, however, the lattice of a photonic crystal is sculpted out of larger entities—such as holes, cubes, or columns—arranged such that the refractive index changes periodically on the scale of a wavelength of light. Although the quest to artificially construct these marvelous materials began less than 40 years ago, scientists have since learned that they already exist in nature. Opals, peacock feathers, and some butterfly wings, for example, all owe their brilliant iridescence to the intricate play of light within naturally engineered photonic crystals. Understanding how light moves in a photonic crystal is fundamental to PCSEL design. We can predict this behavior by studying the crystal’s photonic band structure, which is analogous to the electronic band structure of a semiconductor. One way to do that is to plot the relationship between frequency and wavenumber—the number of wave cycles that fit within one unit cell of the crystal’s lattice. How Light Moves in a Photonic Crystal Consider, for example, a simple one-dimensional photonic crystal formed by alternating ribbons of glass and air. Light entering the crystal will refract through and partially reflect off each interface, producing overlapping beams that reinforce or weaken one another according to the light’s wavelength and direction. Most waves will travel through the material. But at certain points, called singularity points, the reflections combine perfectly with the incident wave to form a standing wave, which does not propagate. In this case, a singularity occurs when a wave undergoes exactly half a cycle from one air ribbon to the next. There are other singularities wherever a unit cell is an integer multiple of half the wavelength. One of us (Susumu Noda) began experimenting with lasers containing photonic crystal-like structures before these materials even had a name. In the mid 1980s, while at Mitsubishi Electric Corporation, he studied a semiconductor laser called a distributed feedback (DFB) laser. A DFB laser is a basic stripe laser with an extra internal layer containing regularly spaced grooves filled with matter of a slightly different refractive index. This periodic structure behaves somewhat like the 1D photonic crystal described above: It repeatedly reflects light at a single wavelength, as determined by the groove spacing, such that a standing wave emerges. Consequently, the laser oscillates at only that wavelength, which is critical for long-haul fiber-optic transmission and high-sensitivity optical sensing. Steel Slicer As the Mitsubishi team demonstrated, a DFB laser can be enticed to perform other tricks. For instance, when the team set the groove spacing equal to the lasing wavelength in the device, some of the oscillating light diffracted upward, causing the laser to shine not only from the tiny front edge of its active stripe but also from the stripe’s top. 
However, this surface beam fanned wildly due to the narrow width of the stripe, which also made it difficult to increase the output power. To Noda’s disappointment, his team’s attempts to widen the stripe—and therefore increase brightness—without causing other headaches were unsuccessful. Nevertheless, those early failures planted an intriguing idea: What if laser light could be controlled in two dimensions instead of one? Boosting Brightness Later, at Kyoto University, Noda led research into 2D and 3D photonic crystals just as the field was coming into being. In 1998, his team built the first PCSEL, and we have since honed the design for various functionalities, including high brightness. In a basic PCSEL, the photonic-crystal layer is a 2D square lattice: Each unit cell is a square delineated by four holes. Although the band structure of a 2D photonic crystal is more complicated than that of a 1D crystal, it likewise reveals singularities where we expect standing waves to form. For our devices, we have made use of the singularity that occurs when the distance between neighboring holes is one wavelength. A gallium arsenide laser operating at 940 nanometers, for example, has an internal wavelength of around 280 nm (considering refractive index and temperature). So the holes in a basic gallium arsenide PCSEL would be set about 280 nm apart. The operating principle is this: When waves of that length are generated in the active layer, the holes in the neighboring photonic-crystal layer act like tiny mirrors, bending the light both backward and sideways. The combined effect of multiple such diffractions creates a 2D standing wave, which is then amplified by the active layer. Some of this oscillating light also diffracts upward and downward and leaks out the laser’s top, producing a surface beam of a single wavelength. A key reason this design works is the large refractive index contrast between the semiconductor and the air inside the holes. As Noda discovered while creating the first device, PCSELs with low refractive index contrasts, like those of DFB lasers, do not oscillate coherently. Also unlike a DFB laser, a PCSEL’s surface emission area is broad and usually round. It can therefore produce a higher quality beam with much lower divergence. Bigger and Brighter  As PCSEL size grows to accommodate more optical power, more lateral modes begin to oscillate. Here’s how those modes are eliminated in each device generation. Higher-order lateral modes form when a standing wave has multiple average peaks of intensity. When the emission area of the PCSEL is relatively small, the peaks sit near its edge. Consequently, most of the light leaks out of the sides, and so the higher-order modes do not oscillate. The double lattice causes light diffracting through the crystal to interfere destructively. These cancellations weaken and spread the intensity peaks of the standing waves, causing the higher-order modes to leak heavily again. However, this method alone does not sufficiently suppress those modes in larger devices. Adjustments to the holes and the bottom reflector induce light exiting the laser to lose some of its energy through interference with the standing waves. Because higher-order modes lose more light, they can be selectively cut off. In 2014, our group reported that a PCSEL with a square lattice of triangular holes and an emission area of 200 by 200 μm could operate continuously at around 1 watt while maintaining a spotlike beam that diverged only about 2 degrees. 
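The roughly 280-nanometer hole spacing quoted above follows directly from the internal wavelength. As a rough check, the short Python sketch below divides the 940-nanometer vacuum wavelength by an assumed effective refractive index of about 3.36, a plausible value for these layers rather than a number given in the article, and recovers the quoted spacing.

# Rough check of the photonic-crystal lattice spacing quoted above.
VACUUM_WAVELENGTH_NM = 940.0   # gallium arsenide laser wavelength, from the article
EFFECTIVE_INDEX = 3.36         # assumed effective refractive index (not from the article)

internal_wavelength_nm = VACUUM_WAVELENGTH_NM / EFFECTIVE_INDEX
print(f"internal wavelength ~ {internal_wavelength_nm:.0f} nm")   # about 280 nm

# For the singularity the basic PCSEL uses, neighboring holes sit one internal
# wavelength apart, so the lattice constant is also about 280 nm.
lattice_constant_nm = internal_wavelength_nm
print(f"lattice constant ~ {lattice_constant_nm:.0f} nm")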
Compared with conventional semiconductor lasers, whose beams typically diverge more than 30 degrees, this performance was remarkable. The next step was to boost optical power, for which we needed a larger device. But here we hit a snag. According to our theoretical models, PCSELs using the single-lattice design could not grow larger than about 200 μm without inviting pesky higher-order lateral modes. In a PCSEL, multiple modes form when the intensity of a standing wave can be distributed in multiple ways due to the interference pattern created by repeated diffractions. In the fundamental (read: desirable) mode, the intensity distribution resembles Mount Fuji, with most of the oscillating light concentrated in the center of the lattice. Each higher-order mode, meanwhile, has two, three, four, or more Mount Fujis. So when the laser’s emission area is relatively small, the intensity peaks of the higher-order modes sit near the lattice’s periphery. Most of their light therefore leaks out of the sides of the device, preventing these modes from oscillating and contributing to the laser beam. But as with conventional lasers, enlarging the emission area makes space for more modes to oscillate. To solve that problem, we added another set of holes to the photonic-crystal layer, creating a double lattice. In our most successful version, a square lattice of circular holes is shifted a quarter wavelength from a second square lattice of elliptical holes. As a result, some of the diffracting light inside the crystal interferes destructively. These cancellations cause the intensity peaks of the lateral modes to weaken and spread. So when we expand the laser’s emission area, light from the higher-order modes still leaks heavily and does not oscillate. Using that approach, we fabricated a PCSEL with a round emission area 1 millimeter in diameter and showed it could produce a 10-W beam under continuous operation. Diverging just one-tenth of a degree, the beam was even slenderer and more collimated than its 200-μm predecessor and more than three times as bright as is possible with a conventional semiconductor laser. Our device also had the advantage of oscillating in a single mode, of course, which conventional lasers of comparable size cannot do. Pushing PCSEL brightness higher required further innovation. At larger diameters, the double-lattice approach alone does not sufficiently suppress higher-order modes, and so they oscillate yet again. We had observed, however, that these modes depart the laser slightly askew, which drew our attention to the backside reflector. (Picture a sheet of tinfoil lining the bottom of your ham and Swiss sandwich.) This 50-watt PCSEL is bright enough to slice through steel. Susumu Noda In previous device generations, this reflector had served simply to bounce downward-diffracted light up and out from the laser’s emitting surface. By adjusting its position (as well as the spacing and shape of the photonic-crystal holes), we found we could control the reflections so that they interfere in a useful way with the 2D standing waves oscillating within the photonic-crystal layer. This interference, or coupling, essentially induces the departing waves to lose some of their energy. The more askew a departing wave, the more light is lost. And poof! No more higher-order modes. That is how, in 2023, we developed a PCSEL whose brightness of 1 GW/cm2/sr rivals that of gas and fiber lasers. 
With a 3-mm emission diameter, it could lase continuously at up to 50 W while sustaining a beam that diverged a minuscule one-twentieth of a degree. We even used it to cut through steel. As the bright, beautiful beam carved a disc out of a metal plate 100 μm thick, our entire lab huddled around, watching in amazement. More Powerful PCSELs As impressive as the steel-slicing demonstration was, PCSELs must be even more powerful to compete in the industrial marketplace. Manufacturing automobile parts, for instance, requires optical powers on the order of kilowatts. It should be fairly straightforward to build a PCSEL that can handle that kind of power—either by assembling an array of nine 3-mm PCSELs or by expanding the emission area of our current device to 1 cm. At that size, higher-order modes would once again emerge, reducing the beam quality. But because they would still be as bright as high-power gas and fiber lasers, such kilowatt-class PCSELs could begin to usurp their bulkier competitors. To be truly game-changing, 1-cm PCSELs would need to level up by suppressing those higher-order modes. We have already devised a way to do that by fine-tuning the photonic-crystal structure and the position of the reflector. Although we have not yet tested this new recipe in the lab, our theoretical models suggest that it could raise PCSEL brightness as high as 10 to 100 GW/cm2/sr. Just imagine the variety of unique and intricate products that could be made when such concentrated light can be wielded from a tiny package. Especially for those high-power applications, we’ll need to improve the laser’s energy efficiency and thermal management. Even without any optimization, the “wall plug” efficiency of PCSELs is already at 30 to 40 percent, exceeding most carbon-dioxide and fiber lasers. What’s more, we’ve found a path we think could lead to 60 percent efficiency. And as for thermal management, the water-cooling technology we’re using in the lab today should be sufficient for a 1,000-W, 1-cm PCSEL. High-brightness PCSELs could also be used to make smaller and more affordable sensor systems for self-driving cars and robots. Recently, we built a lidar system using a 500-μm PCSEL. Under pulsed operation, we ran it at about 20 W and got a terrifically bright beam. Even at 30 meters, the spot size was only 5 cm. Such high resolution is unheard of for a compact lidar system without external lenses. We then mounted our prototypes—which are roughly the size of a webcam—on robotic carts and programmed them to follow us and one another around the engineering building. In a separate line of work, we have shown that PCSELs can emit multiple beams that can be controlled electronically to point in different directions. This on-chip beam steering is achieved by varying the position and size of the holes in the photonic-crystal layer. Ultimately, it could replace mechanical beam steering in lidar systems. If light detectors were also integrated on the same chip, these all-electronic navigation systems would be seriously miniature and low-cost. Although it will be challenging, we eventually hope to make 3-cm lasers with output powers exceeding 10 kilowatts and beams shining up to 1,000 GW/cm2/sr—brighter than any laser that exists today. At such extreme brightness, PCSELs could replace the huge, electricity-hungry CO2 lasers used to generate plasma pulses for extreme ultraviolet lithography machines, making chip manufacturing much more efficient. 
They could similarly advance efforts to realize nuclear fusion, a process that involves firing trillions of watts of laser power at a pea-size fuel capsule. Exceptionally bright lasers also raise the possibility of light propulsion for spaceflight. Instead of taking thousands of years to reach faraway stars, a probe boosted by light could make the journey in only a few decades. It may be a cliché, but we cannot think of a more apt prediction for the next chapter of human ingenuity: The future, as they say, is bright. This article appears in the May 2024 print issue as “The Brightest Semiconductor Laser Ever.”
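As a closing sanity check on the steel-cutting device’s headline numbers, the quoted 50 watts, 3-millimeter aperture, and one-twentieth-of-a-degree divergence do work out to a brightness on the order of 1 GW/cm2/sr. The short Python sketch below does that arithmetic under the assumptions noted in its comments.

import math

# Order-of-magnitude check of the brightness figure quoted above: 50 W from a
# 3-mm-diameter emission area with a beam divergence of one-twentieth of a degree.
# Brightness is taken as power / (emission area x beam solid angle); the quoted
# divergence is treated as a full angle. Exact conventions (half-angle, Gaussian
# beam factors) shift the number somewhat, but not its order of magnitude.
power_w = 50.0
diameter_cm = 0.3
full_divergence_deg = 1.0 / 20.0

area_cm2 = math.pi * (diameter_cm / 2.0) ** 2
half_angle_rad = math.radians(full_divergence_deg / 2.0)
solid_angle_sr = math.pi * half_angle_rad ** 2

brightness_w = power_w / (area_cm2 * solid_angle_sr)
print(f"brightness ~ {brightness_w / 1e9:.1f} GW/cm2/sr")   # about 1 GW/cm2/sr, as quoted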

  • Getting the Grid to Net Zero
    by Benjamin Kroposki on 13. April 2024. at 19:00

    It’s late in the afternoon of 2 April 2023 on the island of Kauai. The sun is sinking over this beautiful and peaceful place, when, suddenly, at 4:25 pm, there’s a glitch: The largest generator on the island, a 26-megawatt oil-fired turbine, goes offline. This is a more urgent problem than it might sound. The westernmost Hawaiian island of significant size, Kauai is home to around 70,000 residents and 30,000 tourists at any given time. Renewable energy accounts for 70 percent of the energy produced in a typical year—a proportion that’s among the highest in the world and that can be hard to sustain for such a small and isolated grid. During the day, the local system operator, the Kauai Island Utility Cooperative, sometimes reaches levels of 90 percent from solar alone. But on 2 April, the 26-MW generator was running near its peak output, to compensate for the drop in solar output as the sun set. At the moment when it failed, that single generator had been supplying 60 percent of the load for the entire island, with the rest being met by a mix of smaller generators and several utility-scale solar-and-battery systems. Normally, such a sudden loss would spell disaster for a small, islanded grid. But the Kauai grid has a feature that many larger grids lack: a technology called grid-forming inverters. An inverter converts direct-current electricity to grid-compatible alternating current. The island’s grid-forming inverters are connected to those battery systems, and they are a special type—in fact, they had been installed with just such a contingency in mind. They improve the grid’s resilience and allow it to operate largely on resources like batteries, solar photovoltaics, and wind turbines, all of which connect to the grid through inverters. On that April day in 2023, Kauai had over 150 megawatt-hours’ worth of energy stored in batteries—and also the grid-forming inverters necessary to let those batteries respond rapidly and provide stable power to the grid. They worked exactly as intended and kept the grid going without any blackouts. The photovoltaic panels at the Kapaia solar-plus-storage facility, operated by the Kauai Island Utility Cooperative in Hawaii, are capable of generating 13 megawatts under ideal conditions.TESLA A solar-plus-storage facility at the U.S. Navy’s Pacific Missile Range Facility, in the southwestern part of Kauai, is one of two on the island equipped with grid-forming inverters. U.S. NAVY That April event in Kauai offers a preview of the electrical future, especially for places where utilities are now, or soon will be, relying heavily on solar photovoltaic or wind power. Similar inverters have operated for years within smaller off-grid installations. However, using them in a multimegawatt power grid, such as Kauai’s, is a relatively new idea. And it’s catching on fast: At the time of this writing, at least eight major grid-forming projects are either under construction or in operation in Australia, along with others in Asia, Europe, North America, and the Middle East. Reaching net-zero-carbon emissions by 2050, as many international organizations now insist is necessary to stave off dire climate consequences, will require a rapid and massive shift in electricity-generating infrastructures. 
The International Energy Agency has calculated that to have any hope of achieving this goal would require the addition, every year, of 630 gigawatts of solar photovoltaics and 390 GW of wind starting no later than 2030—figures that are around four times as great as any annual tally so far. The only economical way to integrate such high levels of renewable energy into our grids is with grid-forming inverters, which can be implemented on any technology that uses an inverter, including wind, solar photovoltaics, batteries, fuel cells, microturbines, and even high-voltage direct-current transmission lines. Grid-forming inverters for utility-scale batteries are available today from Tesla, GPTech, SMA, GE Vernova, EPC Power, Dynapower, Hitachi, Enphase, CE+T, and others. Grid-forming converters for HVDC, which convert high-voltage direct current to alternating current and vice versa, are also commercially available, from companies including Hitachi, Siemens, and GE Vernova. For photovoltaics and wind, grid-forming inverters are not yet commercially available at the size and scale needed for large grids, but they are now being developed by GE Vernova, Enphase, and Solectria. The Grid Depends on Inertia To understand the promise of grid-forming inverters, you must first grasp how our present electrical grid functions, and why it’s inadequate for a future dominated by renewable resources such as solar and wind power. Conventional power plants that run on natural gas, coal, nuclear fuel, or hydropower produce electricity with synchronous generators—large rotating machines that produce AC electricity at a specified frequency and voltage. These generators have some physical characteristics that make them ideal for operating power grids. Among other things, they have a natural tendency to synchronize with one another, which helps make it possible to restart a grid that’s completely blacked out. Most important, a generator has a large rotating mass, namely its rotor. When a synchronous generator is spinning, its rotor, which can weigh well over 100 tonnes, cannot stop quickly. The Kauai electric transmission grid operates at 57.1 kilovolts, an unusual voltage that is a legacy from the island’s sugar-plantation era. The network has grid-forming inverters at the Pacific Missile Range Facility, in the southwest, and at Kapaia, in the southeast. CHRIS PHILPOT This characteristic gives rise to a property called system inertia. It arises naturally from those large generators running in synchrony with one another. Over many years, engineers used the inertia characteristics of the grid to determine how fast a power grid will change its frequency when a failure occurs, and then developed mitigation procedures based on that information. If one or more big generators disconnect from the grid, the sudden imbalance of load to generation creates torque that extracts rotational energy from the remaining synchronous machines, slowing them down and thereby reducing the grid frequency—the frequency is electromechanically linked to the rotational speed of the generators feeding the grid. Fortunately, the kinetic energy stored in all that rotating mass slows this frequency drop and typically allows the remaining generators enough time to ramp up their power output to meet the additional load. Electricity grids are designed so that even if the network loses its largest generator, running at full output, the other generators can pick up the additional load and the frequency nadir never falls below a specific threshold. 
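The role of inertia described above is usually summarized with the swing equation: immediately after a loss of generation, frequency falls at a rate proportional to the power deficit and inversely proportional to the stored rotational energy. A toy Python sketch with assumed round numbers, not the parameters of Kauai or any other grid discussed here:

# Toy illustration of how inertia limits the rate of change of frequency (RoCoF)
# right after a generator trips, using the standard swing-equation approximation.
# All numbers are assumed round values, not the parameters of any grid in this article.
F_NOMINAL_HZ = 60.0
SYSTEM_MVA = 1000.0         # combined rating of the online synchronous machines (assumed)
INERTIA_H_SECONDS = 4.0     # aggregate inertia constant (assumed)
LOST_GENERATION_MW = 100.0  # size of the tripped generator (assumed)

rocof_hz_per_s = LOST_GENERATION_MW / SYSTEM_MVA * F_NOMINAL_HZ / (2.0 * INERTIA_H_SECONDS)
print(f"initial frequency decline ~ {rocof_hz_per_s:.2f} Hz per second")

# Halving the inertia doubles the initial rate of decline, which is why grids
# dominated by grid-following inverters (which contribute no inertia) need
# another way to arrest the frequency drop.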
In the United States, where nominal grid frequency is 60 hertz, the threshold is generally between 59.3 and 59.5 Hz. As long as the frequency remains above this point, local blackouts are unlikely to occur. Why We Need Grid-Forming Inverters Wind turbines, photovoltaics, and battery-storage systems differ from conventional generators because they all produce direct current (DC) electricity—they don’t have a heartbeat like alternating current does. With the exception of wind turbines, these are not rotating machines. And most modern wind turbines aren’t synchronously rotating machines from a grid standpoint—the frequency of their AC output depends on the wind speed. So that variable-frequency AC is rectified to DC before being converted to an AC waveform that matches the grid’s. As mentioned, inverters convert the DC electricity to grid-compatible AC. A conventional, or grid-following, inverter uses power transistors that repeatedly and rapidly switch the polarity applied to a load. By switching at high speed, under software control, the inverter produces a high-frequency AC signal that is filtered by capacitors and other components to produce a smooth AC current output. So in this scheme, the software shapes the output waveform. In contrast, with synchronous generators the output waveform is determined by the physical and electrical characteristics of the generator. Grid-following inverters operate only if they can “see” an existing voltage and frequency on the grid that they can synchronize to. They rely on controls that sense the frequency of the voltage waveform and lock onto that signal, usually by means of a technology called a phase-locked loop. So if the grid goes down, these inverters will stop injecting power because there is no voltage to follow. A key point here is that grid-following inverters do not deliver any inertia. Przemyslaw Koralewicz, David Corbus, Shahil Shah, and Robb Wallen, researchers at the National Renewable Energy Laboratory, evaluate a grid-forming inverter used on Kauai at the NREL Flatirons Campus. DENNIS SCHROEDER/NREL Grid-following inverters work fine when inverter-based power sources are relatively scarce. But as the levels of inverter-based resources rise above 60 to 70 percent, things start to get challenging. That’s why system operators around the world are beginning to put the brakes on renewable deployment and curtailing the operation of existing renewable plants. For example, the Electric Reliability Council of Texas (ERCOT) regularly curtails the use of renewables in that state because of stability issues arising from too many grid-following inverters. It doesn’t have to be this way. When the level of inverter-based power sources on a grid is high, the inverters themselves could support grid-frequency stability. And when the level is very high, they could form the voltage and frequency of the grid. In other words, they could collectively set the pulse, rather than follow it. That’s what grid-forming inverters do. The Difference Between Grid Forming and Grid Following Grid-forming (GFM) and grid-following (GFL) inverters share several key characteristics. Both can inject current into the grid during a disturbance. Also, both types of inverters can support the voltage on a grid by controlling their reactive power, which is the product of the voltage and the current that are out of phase with each other. 
Both kinds of inverters can also help prop up the frequency on the grid, by controlling their active power, which is the product of the voltage and current that are in phase with each other. What makes grid-forming inverters different from grid-following inverters is mainly software. GFM inverters are controlled by code designed to maintain a stable output voltage waveform, but they also allow the magnitude and phase of that waveform to change over time. What does that mean in practice? The unifying characteristic of all GFM inverters is that they hold a constant voltage magnitude and frequency on short timescales—for example, a few dozen milliseconds—while allowing that waveform’s magnitude and frequency to change over several seconds to synchronize with other nearby sources, such as traditional generators and other GFM inverters. Some GFM inverters, called virtual synchronous machines, achieve this response by mimicking the physical and electrical characteristics of a synchronous generator, using control equations that describe how it operates. Other GFM inverters are programmed to simply hold a constant target voltage and frequency, allowing that target voltage and frequency to change slowly over time to synchronize with the rest of the power grid following what is called a droop curve. A droop curve is a formula used by grid operators to indicate how a generator should respond to a deviation from nominal voltage or frequency on its grid. There are many variations of these two basic GFM control methods, and other methods have been proposed as well. At least eight major grid-forming projects are either under construction or in operation in Australia, along with others in Asia, Europe, North America, and the Middle East. To better understand this concept, imagine that a transmission line shorts to ground or a generator trips due to a lightning strike. (Such problems typically occur multiple times a week, even on the best-run grids.) The key advantage of a GFM inverter in such a situation is that it does not need to quickly sense frequency and voltage decline on the grid to respond. Instead, a GFM inverter just holds its own voltage and frequency relatively constant by injecting whatever current is needed to achieve that, subject to its physical limits. In other words, a GFM inverter is programmed to act like an AC voltage source behind some small impedance (impedance is the opposition to AC current arising from resistance, capacitance, and inductance). In response to an abrupt drop in grid voltage, its digital controller increases current output by allowing more current to pass through its power transistors, without even needing to measure the change it’s responding to. In response to falling grid frequency, the controller increases power. GFL controls, on the other hand, need to first measure the change in voltage or frequency, and then take an appropriate control action before adjusting their output current to mitigate the change. This GFL strategy works if the response does not need to be superfast (as in microseconds). But as the grid becomes weaker (meaning there are fewer voltage sources nearby), GFL controls tend to become unstable. That’s because by the time they measure the voltage and adjust their output, the voltage has already changed significantly, and fast injection of current at that point can potentially lead to a dangerous positive feedback loop. 
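The droop-curve idea introduced above fits in a couple of lines of code. The Python sketch below shows the generic textbook frequency-droop relation; the 5 percent droop and the power figures are illustrative assumptions, not settings from any inverter mentioned in this article.

# Minimal sketch of a frequency droop curve, the rule a grid-forming inverter
# (or a conventional governor) can use to share load. This is the generic
# textbook form, not the control law of any product named in the article.
F_NOMINAL_HZ = 60.0
DROOP = 0.05          # 5 percent droop: a full rated power swing moves frequency by 5 percent
P_RATED_MW = 100.0    # assumed rating of the unit

def droop_frequency_target(p_out_mw, p_setpoint_mw):
    """Frequency the unit steers toward, given how far its output is from its setpoint."""
    return F_NOMINAL_HZ * (1.0 - DROOP * (p_out_mw - p_setpoint_mw) / P_RATED_MW)

# Delivering 20 MW above the setpoint lets the target frequency sag slightly
# (to about 59.4 Hz here), so other droop-controlled sources pick up their share.
print(f"{droop_frequency_target(70.0, 50.0):.2f} Hz")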
Adding more GFL inverters also tends to reduce stability because it becomes more difficult for the remaining voltage sources to stabilize them all. When a GFM inverter responds with a surge in current, it must do so within tightly prescribed limits. It must inject enough current to provide some stability but not enough to damage the power transistors that control the current flow. Increasing the maximum current flow is possible, but it requires increasing the capacity of the power transistors and other components, which can significantly increase cost. So most inverters (both GFM and GFL) don’t provide current surges larger than about 10 to 30 percent above their rated steady-state current. For comparison, a synchronous generator can inject around 500 to 700 percent more than its rated current for several AC line cycles (around a tenth of a second, say) without sustaining any damage. For a large generator, this can amount to thousands of amperes. Because of this difference between inverters and synchronous generators, the protection technologies used in power grids will need to be adjusted to account for lower levels of fault current. What the Kauai Episode Reveals The 2 April event on Kauai offered an unusual opportunity to study the performance of GFM inverters during a disturbance. After the event, one of us (Andy Hoke) along with Jin Tan and Shuan Dong and some coworkers at the National Renewable Energy Laboratory, collaborated with the Kauai Island Utility Cooperative (KIUC) to get a clear understanding of how the remaining system generators and inverter-based resources interacted with each other during the disturbance. What we determined will help power grids of the future operate at levels of inverter-based resources up to 100 percent. NREL researchers started by creating a model of the Kauai grid. We then used a technique called electromagnetic transient (EMT) simulation, which yields information on the AC waveforms on a sub-millisecond basis. In addition, we conducted hardware tests at NREL’s Flatirons Campus on a scaled-down replica of one of Kauai’s solar-battery plants, to evaluate the grid-forming control algorithms for inverters deployed on the island. 
A recording of the frequency responses to two different grid disruptions on Kauai shows the advantages of grid-forming inverters. The red trace shows the relatively contained response with two grid-forming inverter systems in operation. The blue trace shows the more extreme response to an earlier, comparable disruption, at a time when there was only one grid-forming plant online. NATIONAL RENEWABLE ENERGY LABORATORY At 4:25 pm on 2 April, there were two large GFM solar-battery plants, one large GFL solar-battery plant, one large oil-fired turbine, one small diesel plant, two small hydro plants, one small biomass plant, and a handful of other solar generators online. Immediately after the oil-fired turbine failed, the AC frequency dropped quickly from 60 Hz to just above 59 Hz during the first 3 seconds [red trace in the figure above]. As the frequency dropped, the two GFM-equipped plants quickly ramped up power, with one plant quadrupling its output and the other doubling its output in less than 1/20 of a second. In contrast, the remaining synchronous machines contributed some rapid but unsustained active power via their inertial responses, but took several seconds to produce sustained increases in their output. It is safe to say, and it has been confirmed through EMT simulation, that without the two GFM plants, the entire grid would have experienced a blackout. Coincidentally, an almost identical generator failure had occurred a couple of years earlier, on 21 November 2021. In this case, only one solar-battery plant had grid-forming inverters. As in the 2023 event, the three large solar-battery plants quickly ramped up power and prevented a blackout. However, the frequency and voltage throughout the grid began to oscillate around 20 times per second [the blue trace in the figure above], indicating a major grid stability problem and causing some customers to be automatically disconnected. NREL’s EMT simulations, hardware tests, and controls analysis all confirmed that the severe oscillation was due to a combination of grid-following inverters tuned for extremely fast response and a lack of sufficient grid strength to support those GFL inverters. In other words, the 2021 event illustrates how too many conventional GFL inverters can erode stability. Comparing the two events demonstrates the value of GFM inverter controls—not just to provide fast yet stable responses to grid events but also to stabilize nearby GFL inverters and allow the entire grid to maintain operations without a blackout. Australia Commissions Big GFM Projects In sunny South Australia, solar power now routinely supplies all or nearly all of the power needed during the middle of the day. Shown here is the chart for 31 December 2023, in which solar supplied slightly more power than the state needed at around 1:30 p.m. AUSTRALIAN ENERGY MARKET OPERATOR (AEMO) The next step for inverter-dominated power grids is to go big. Some of the most important deployments are in South Australia. 
As in Kauai, the South Australian grid now has such high levels of solar generation that it regularly experiences days in which the solar generation can exceed the peak demand during the middle of the day [see figure at left]. The most well-known of the GFM resources in Australia is the Hornsdale Power Reserve in South Australia. This 150-MW/194-MWh system, which uses Tesla’s Powerpack 2 lithium-ion batteries, was originally installed in 2017 and was upgraded to grid-forming capability in 2020. Australia’s largest battery (500 MW/1,000 MWh) with grid-forming inverters is expected to start operating in Liddell, New South Wales, later this year. This battery, from AGL Energy, will be located at the site of a decommissioned coal plant. This and several other larger GFM systems are expected to start working on the South Australia grid over the next year. The leap from power systems like Kauai’s, with a peak demand of roughly 80 MW, to ones like South Australia’s, at 3,000 MW, is a big one. But it’s nothing compared to what will come next: grids with peak demands of 85,000 MW (in Texas) and 742,000 MW (the rest of the continental United States). Several challenges need to be solved before we can attempt such leaps. They include creating standard GFM specifications so that inverter vendors can create products. We also need accurate models that can be used to simulate the performance of GFM inverters, so we can understand their impact on the grid. Some progress in standardization is already happening. In the United States, for example, the North American Electric Reliability Corporation (NERC) recently published a recommendation that all future large-scale battery-storage systems have grid-forming capability. Standards for GFM performance and validation are also starting to emerge in some countries, including Australia, Finland, and Great Britain. In the United States, the Department of Energy recently backed a consortium to tackle building and integrating inverter-based resources into power grids. Led by the National Renewable Energy Laboratory, the University of Texas at Austin, and the Electric Power Research Institute, the Universal Interoperability for Grid-Forming Inverters (UNIFI) Consortium aims to address the fundamental challenges in integrating very high levels of inverter-based resources with synchronous generators in power grids. The consortium now has over 30 members from industry, academia, and research laboratories. One of Australia’s major energy-storage facilities is the Hornsdale Power Reserve, at 150 megawatts and 194 megawatt-hours. Hornsdale and another facility, the Riverina Battery, are the country’s two largest grid-forming installations. NEOEN In addition to specifications, we need computer models of GFM inverters to verify their performance in large-scale systems. Without such verification, grid operators won’t trust the performance of new GFM technologies. Using GFM models built by the UNIFI Consortium, system operators and utilities such as the Western Electricity Coordinating Council, American Electric Power, and ERCOT (Texas’s grid-reliability organization) are conducting studies to understand how GFM technology can help their grids. Getting to a Greener Grid As we progress toward a future grid dominated by inverter-based generation, a question naturally arises: Will all inverters need to be grid-forming? No. 
Several studies and simulations have indicated that we’ll need just enough GFM inverters to strengthen each area of the grid so that nearby GFL inverters remain stable. How many GFMs is that? The answer depends on the characteristics of the grid and other generators. Some initial studies have shown that a power system can operate with 100 percent inverter-based resources if around 30 percent are grid-forming. More research is needed to understand how that number depends on details such as the grid topology and the control details of both the GFLs and the GFMs. Ultimately, though, electricity generation that is completely carbon free in its operation is within our grasp. Our challenge now is to make the leap from small to large to very large systems. We know what we have to do, and it will not require technologies that are far more advanced than what we already have. It will take testing, validation in real-world scenarios, and standardization so that synchronous generators and inverters can unify their operations to create a reliable and robust power grid. Manufacturers, utilities, and regulators will have to work together to make this happen rapidly and smoothly. Only then can we begin the next stage of the grid’s evolution, to large-scale systems that are truly carbon neutral. This article appears in the May 2024 print issue as “A Path to 100 Percent Renewable Energy.”

  • Video Friday: Robot Dog Can’t Fall
    by Evan Ackerman on 12. April 2024. at 15:11

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RoboCup German Open: 17–21 April 2024, KASSEL, GERMANY AUVSI XPONENTIAL 2024: 22–25 April 2024, SAN DIEGO Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS Cybathlon 2024: 25–27 October 2024, ZURICH Enjoy today's videos! I think suggesting that robots can't fall is much less useful than suggesting that robots can fall and quickly and easily get back up again. [ Deep Robotics ] Sanctuary AI says that this video shows Phoenix operating at "human-equivalent speed," but they don't specify which human or under which conditions. Though it's faster than I would be, that's for sure. [ Sanctuary AI ] "Suzume" is an animated film by Makoto Shinkai, in which one of the characters gets turned into a three-legged chair: Shintaro Inoue from JSK Lab at the University of Tokyo has managed to build a robotic version of that same chair, which is pretty impressive: [ Github ] Thanks, Shintaro! Humanoid robot EVE training for home assistance like putting groceries into the kitchen cabinets. [ 1X ] This is the RAM—robotic autonomous mower. It can be dropped anywhere in the world and will wake up with a mission to make tall grass around it shorter. Here is a quick clip of it working on the Presidio in SF. [ Electric Sheep ] This year, our robots braved a Finnish winter for the first time. As the snow clears and the days get longer, we're looking back on how our robots made thousands of deliveries to S Group customers during the colder months. [ Starship ] Agility Robotics is doing its best to answer the (very common) question of "Okay, but what can humanoid robots actually do?" [ Agility Robotics ] Digit is great and everything, but Cassie will always be one of my favorite robots. [ CoRIS ] Adopting omnidirectional Field of View (FoV) cameras in aerial robots vastly improves perception ability, significantly advancing aerial robotics' capabilities in inspection, reconstruction, and rescue tasks. We propose OmniNxt, a fully open-source aerial robotics platform with omnidirectional perception. [ OmniNxt ] The MAkEable framework enhances mobile manipulation in settings designed around humans by streamlining the process of sharing learned skills and experiences among different robots and contexts. Practical tests confirm its efficiency in a range of scenarios, involving different robots, in tasks such as object grasping, coordinated use of both hands, and the exchange of skills among humanoid robots. [ Paper ] We conducted trials of Ringbot outdoors on a 400-meter track. With a power source of 2,300 milliamp-hours at 11.1 volts, Ringbot managed to cover approximately 3 kilometers in 37 minutes. We commanded its target speed and direction using a remote joystick controller (Steam Deck), and Ringbot experienced five falls during this trial. [ Paper ] There is a notable lack of consistency about where exactly Boston Dynamics wants you to think Spot's eyes are. [ Boston Dynamics ] As with every single cooking video, there's a lot of background prep that's required for this robot to cook an entire meal, but I would utterly demolish those fries. 
[ Dino Robotics ] Here’s everything you need to know about Wing delivery drones, except for how much human time they actually require and the true cost of making deliveries by drone, because those things aren’t fun to talk about. [ Wing ] This CMU Teruko Yata Memorial Lecture is by Agility Robotics’ Jonathan Hurst, on “Human-Centric Robots and How Learning Enables Generality.” Humans have dreamt of robot helpers forever. What’s new is that this dream is becoming real. New developments in AI, building on foundations of hardware and passive dynamics, enable vastly improved generality. Robots can step out of highly structured environments and become more human-centric: operating in human spaces, interacting with people, and doing some basic human workflows. By connecting a Large Language Model, Digit can convert natural language high-level requests into complex robot instructions, composing the library of skills together, using human context to achieve real work in the human world. All of this is new—and it is never going back: AI will drive a fast-following robot revolution that is going to change the way we live. [ CMU ]

  • Pogo Stick Microcopter Bounces off Floors and Walls
    by Evan Ackerman on 12. April 2024. at 13:30

    We tend to think about hopping robots from the ground up. That is, they start on the ground, and then, by hopping, incorporate an aerial phase into their locomotion. But there's no reason why aerial robots can't approach hopping from the other direction, by adding a hopping ground phase to flight. Hopcopter is the first robot that I've ever seen give this a try, and it's remarkably effective, combining a tiny quadrotor with a springy leg to hop hop hop all over the place. Songnan Bai, Runze Ding, Song Li, and Bingxuan Pu So why is it worth adding a pogo stick to an otherwise perfectly functional quadrotor? Well, flying is certainly a valuable ability to have, but it does take a lot of energy. If you pay close attention to birds (acknowledged experts in the space), they tend to spend a substantial amount of time doing their level best not to fly, often by walking on the ground or jumping around in trees. Not flying most of the time is arguably one of the things that makes birds so successful—it's that multimodal locomotion capability that has helped them to adapt to so many different environments and situations. Hopcopter is multimodal as well, although in a slightly more restrictive sense: Its two modes are flying and intermittent flying. But the intermittent flying is very important, because cutting down on that flight phase gives Hopcopter some of the same efficiency benefits that birds experience. By itself, a quadrotor of Hopcopter's size can stay airborne for about 400 seconds, while Hopcopter can hop continuously for more than 20 minutes. If your objective is to cover as much distance as possible, Hopcopter might not be as effective as a legless quadrotor. But if your objective is instead something like inspection or search and rescue, where you need to spend a fair amount of time not moving very much, hopping could be significantly more effective. Hopcopter is a small quadcopter (specifically a Crazyflie) attached to a springy pogo-stick leg.Songnan Bai, Runze Ding, Song Li, and Bingxuan Pu Hopcopter can reposition itself on the fly to hop off of different surfaces.Songnan Bai, Runze Ding, Song Li, and Bingxuan Pu The actual hopping is mostly passive. Hopcopter's leg is two rigid pieces connected by rubber bands, with a Crazyflie microcopter stapled to the top. During a hop, the Crazyflie can add directional thrust to keep the hops hopping and alter its direction as well as its height, from 0.6 meters to 1.6 meters. There isn't a lot of room for extra sensors on Hopcopter, but the addition of some stabilizing fins allows for continuous hopping without any positional feedback. Besides vertical hopping, Hopcopter can also position itself in midair to hop off of surfaces at other orientations, allowing it to almost instantaneously change direction, which is a neat trick. And it can even do midair somersaults, because why not? Hopcopter's repertoire of tricks includes somersaults.Songnan Bai, Runze Ding, Song Li, and Bingxuan Pu The researchers, based at the City University of Hong Kong, say that the Hopcopter technology (namely, the elastic leg) could be easily applied to most other quadcopter platforms, turning them into Hopcopters as well. And if you're more interested in extra payload rather than extra endurance, it's possible to use hopping in situations where a payload would be too heavy for continuous flight. The researchers published their work 10 April in Science Robotics.
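Because the hop itself is mostly passive and ballistic, a small sketch can relate the hop heights quoted above (0.6 to 1.6 meters) to the liftoff speed the leg and rotors must supply. It uses the ideal projectile relation v = sqrt(2gh) and ignores drag, rotor thrust during the flight phase, and spring losses, so it is an illustration rather than a figure from the paper.

# Illustrative ballistic approximation of Hopcopter's hops; drag and losses ignored.
import math

G = 9.81  # gravitational acceleration, m/s^2

for hop_height_m in (0.6, 1.0, 1.6):
    liftoff_speed = math.sqrt(2 * G * hop_height_m)  # v = sqrt(2gh)
    hang_time = 2 * liftoff_speed / G                # time up plus time down
    print(f"{hop_height_m:.1f} m hop -> ~{liftoff_speed:.1f} m/s at liftoff, "
          f"~{hang_time:.2f} s in the air")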

  • Caltech’s SSPD-1 Is a New Idea for Space-Based Solar
    by W. Wayt Gibbs on 11. April 2024. at 21:29

    The idea of powering civilization from gigantic solar plants in orbit is older than any space program, but despite seven decades of rocket science, the concept—to gather near-constant sunlight tens of thousands of kilometers above the equator, beam it to Earth as microwaves, and convert it to electricity—still remains tantalizingly over the horizon. Several recently published deep-dive analyses commissioned by NASA and the European Space Agency have thrown cold water on the hope that space solar power could affordably generate many gigawatts of clean energy in the near future. And yet the dream lives on. The dream achieved a kind of lift-off in January 2023. That’s when SSPD-1, a solar space-power demonstrator satellite carrying a bevy of new technologies designed at the California Institute of Technology, blasted into low Earth orbit for a year-long mission. Mindful of concerns about the technical feasibility of robotic in-space assembly of satellites, each an order of magnitude larger than the International Space Station, the Caltech team has been looking at very different approaches to space solar power. For an update on what the SSPD-1 mission achieved and how it will shape future concepts for space solar-power satellites, IEEE Spectrum spoke with Ali Hajimiri, an IEEE Fellow, professor of electrical engineering at Caltech, and codirector of the school’s space-based solar power project. The interview has been condensed and edited for length and clarity. SSPD-1 flew with several different testbeds. Let’s start with the MAPLE (Microwave Array for Power-transfer Low-orbit Experiment) testbed for wireless power transmission: When you and your team went up on the roof of your building on campus in May 2023 and aimed your antennas to where the satellite was passing over, did your equipment pick up actual power being beamed down or just a diagnostic signal? Ali Hajimiri is the codirector of Caltech’s space-based solar power project.Caltech Ali Hajimiri: I would call it a detection. The primary purpose of the MAPLE experiment was to demonstrate wireless energy transfer in space using flexible, lightweight structures and also standard CMOS integrated circuits. On one side are the antennas that transmit the power, and on the flip side are our custom CMOS chips that are part of the power-transfer electronics. The point of these things is to be very lightweight, to reduce the cost of launch into space, and to be very flexible for storage and deployment, because we want to wrap it and unwrap it like a sail. I see—wrap them up to fit inside a rocket and then unwrap and stretch them flat once they are released into orbit. Hajimiri: MAPLE’s primary objective was to demonstrate that these flimsy-looking arrays and CMOS integrated circuits can operate in space. And not only that, but that they can steer wireless energy transfer to different targets in space, different receivers. And by energy transfer I mean net power out at the receiver side. We did demonstrate power transfer in space, and we made a lot of measurements. We are writing up the details now and will publish those results. The second part of this experiment—really a stretch goal—was to demonstrate that ability to point the beam to the right place on Earth and see whether we picked up the expected power levels. Now, the larger the transmission array is in space, the greater the ability to focus the energy to a smaller spot on the ground. 
Right, because diffraction of the beam limits the size of the spot, as a function of the transmitter size and the frequency of the microwaves. Hajimiri: Yes. The array we had in space for MAPLE was very small. As a result, the transmitter spread the power over a very large area. So we captured a very small fraction of the energy—that's why I call it a detection; it was not net positive power. But we measured it. We wanted to see: Do we get what we predict from our calculations? And we found it was in the right range of power levels we expected from an experiment like that. So, comparable in power to the signals that come down in standard communication satellite operations. Hajimiri: But done using this flexible, lightweight system—that's what makes it better. You can imagine developing the next generation of communication satellites or space-based sensors being built with these to make the system significantly cheaper and lighter and easier to deploy. The satellites used now for Starlink and Kuiper—they work great, but they are bulky and heavy. With this technology for the next generation, you could deploy hundreds of them with a very small and much cheaper launch. It could lead to a much more effective Internet in the sky. Tell me about ALBA, the experiment on the mission that tested 32 different and novel kinds of photovoltaic solar cells to see how they perform in space. What were the key takeaways? Hajimiri: My Caltech colleague Harry Atwater led that experiment. What works best on Earth is not necessarily what works best in space. In space there is a lot of radiation damage, and they were able to measure degradation rates over months. On the other hand, there is no water vapor in space, no air oxidation, which is good for materials like perovskites that have problems with those things. So Harry and his team are exploring the trade-offs and developing a lot of new cells that are much cheaper and lighter: Cells made with thin films of perovskites or semiconductors like gallium arsenide, cells that use quantum dots, or use waveguides or other optics to concentrate the light. Many of these cells show great promise. Very thin layers of gallium arsenide, in particular, seem very conducive to making cells that are lightweight but very high performance and much lower in cost because they need very little semiconductor material. Many of the design concepts for solar-power satellites, including one your group published in a 2022 preprint, incorporate concentrators to reduce the amount of photovoltaic area and mass needed. Hajimiri: A challenge with that design is the rather narrow acceptance angle: Things have to be aligned just right so that the focused sunlight hits the cell properly. That's one of the reasons we've pulled away from that approach and moved toward a flat design. A view from inside MAPLE: On the right is the array of flexible microwave power transmitters, and on the left are receivers they transmit that power to.Caltech There are some other major differences between the Caltech power satellite design and the other concepts out there. For example, the other designs I've seen would use microwaves in the Wi-Fi range, between 2 and 6 gigahertz, because cheap components are available for those frequencies. But yours is at 10 GHz? Hajimiri: Exactly—and it's a major advantage because when you double the frequency, the size of the systems in space and on the ground goes down by a factor of four. 
We can do that basically because we build our own microchips and have a lot of capabilities in millimeter-wave circuit design. We’ve actually demonstrated some of these flexible panels that work at 28 GHz. And your design avoids the need for robots to do major assembly of components in space? Hajimiri: Our idea is to deploy a fleet of these sail-like structures that then all fly in close formation. They are not attached to each other. That translates to a major cost reduction. Each one of them has little thrusters on the edges, and it contains internal sensors that let it measure its own shape as it flies and then correct the phase of its transmission accordingly. Each would also track its own position relative to the neighbors and its angle to the sun. From your perspective as an electrical engineer, what are the really hard problems still to be solved? Hajimiri: Time synchronization between all parts of the transmitter array is incredibly crucial and one of the most interesting challenges for the future. Because the transmitter is a phased array, each of the million little antennas in the array has to synchronize precisely with the phase of its neighbors in order to steer the beam onto the receiver station on the ground. Hajimiri: Right. To give you a sense of the level of timing precision that we need across an array like this: We have to reduce phase noise and timing jitter to just a few picoseconds across the entire kilometer-wide transmitter. In the lab, we do that with wires of precise length or optical fibers that feed into CMOS chips with photodiodes built into them. We have some ideas about how to do that wirelessly, but we have no delusions: This is a long journey. What other challenges loom large? Hajimiri: The enormous scale of the system and the new manufacturing infrastructure needed to make it is very different from anything humanity has ever built. If I were to rank the challenges, I would put getting the will, resources, and mindshare behind a project of this magnitude as number one.
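Two quick calculations make these points concrete: how the diffraction-limited ground spot shrinks as the transmit frequency rises, and why a few picoseconds of jitter matter at 10 GHz. The kilometer-wide aperture and geostationary distance follow the discussion above, but the Airy-style constant, the specific frequencies, and the jitter values are illustrative assumptions, not parameters of the Caltech design.

# Illustrative scaling for a space-solar microwave beam; specific values are assumptions.
C = 299_792_458.0           # speed of light, m/s
GEO_ALTITUDE_M = 35_786e3   # geostationary altitude, m
APERTURE_M = 1_000.0        # a kilometer-wide transmitter array

def ground_spot_m(freq_hz):
    """Airy-disk-style estimate of the beam spot diameter on the ground."""
    wavelength_m = C / freq_hz
    return 2.44 * wavelength_m * GEO_ALTITUDE_M / APERTURE_M

for f_ghz in (2.45, 5.8, 10.0):
    print(f"{f_ghz:5.2f} GHz -> ground spot roughly {ground_spot_m(f_ghz * 1e9) / 1e3:.1f} km across")

# Timing jitter versus phase error at 10 GHz, where the RF period is 100 ps.
period_ps = 1e12 / 10e9
for jitter_ps in (1, 3, 10):
    print(f"{jitter_ps:2d} ps of jitter -> {360.0 * jitter_ps / period_ps:.0f} degrees of phase error")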

  • Marco Hutter Wants to Solve Robotics’ Hard Problems
    by Evan Ackerman on 11. April 2024. at 19:21

    Last December, the AI Institute announced that it was opening an office in Zurich as a European counterpart to its Boston headquarters and recruited Marco Hutter to helm the office. Hutter also runs the Robotic Systems Lab at ETH Zurich, arguably best known as the origin of the ANYmal quadruped robot (but it also does tons of other cool stuff). We’re doing our best to keep close tabs on the institute, because it’s one of a vanishingly small number of places that currently exist where roboticists have the kind of long-term resources and vision necessary to make substantial progress on really hard problems that aren’t quite right for either industry or academia. The institute is still scaling up (and the branch in Zurich has only just kicked things off), but we did spot some projects that the Boston folks have been working on, and as you can see from the clips at the top of this page, they’re looking pretty cool. Meanwhile, we had a chance to check in with Marco Hutter to get a sense of what the Zurich office will be working on and how he’s going to be solving all of the hard problems in robotics. All of them! How much can you tell us about what you’ll be working on at the AI Institute? Marco Hutter: If you know the research that I’ve been doing in the past at ETH and with our startups, there’s an overlap on making systems more mobile, making systems more able to interact with the world, making systems in general more capable on the hardware and software side. And that’s what the institute strives for. The institute describes itself as a research organization that aims to solve the most important and fundamental problems in robotics and AI. What do you think those problems are? Marco Hutter is the head of the AI Institute’s new Zurich branch.Swiss Robotics Day Hutter: There are lots of problems. If you’re looking at robots today, we have to admit that they’re still pretty stupid. The way they move, their capability of understanding their environment, the way they’re able to interact with unstructured environments—I think we’re still lacking a lot of skills on the robotic side to make robots useful in all of the tasks we wish them to do. So we have the ambition of having these robots taking over all these dull, dirty, and dangerous jobs. But if we’re honest, today the biggest impact is really only for the dull part. And I think these dirty and dangerous jobs, where we really need support from robots, that’s still going to take a lot of fundamental work on the robotics and AI side to make enough progress for robots to become useful tools. What is it about the institute that you think will help robotics make more progress in these areas? Hutter: I think the institute is one of these unique places where we are trying to bring the benefits of the academic world and the benefits from this corporate world together. In academia, we have all kinds of crazy ideas and we try to develop them in all different directions, but at the same time, we have limited engineering support, and we can only go so far. Making robust and reliable hardware systems is a massive effort, and that kind of engineering is much better done in a corporate lab. You’ve seen this a little bit with the type of work my lab has been doing in the past. We built simple quadrupeds with a little bit of mobility, but in order to make them robust, we eventually had to spin it out. We had to bring it to the corporate world, because for a research group, a pure academic group, it would have been impossible. 
But at the same time, you’re losing something, right? Once you go into your corporate world and you’re running a business, you have to be very focused; you can’t be that explorative and free anymore. So if you bring these two things together through the institute, with long-term planning, enough financial support, and brilliant people both in the U.S. and Europe working together, I think that’s what will hopefully help us make significant progress in the next couple of years. “We’re very different from a traditional company, where at some point you need to have a product that makes money. Here, it’s really about solving problems and taking the next step.” —Marco Hutter, AI Institute And what will that actually mean in the context of dynamically mobile robots? Hutter: If you look at Boston Dynamics’ Atlas doing parkour, or ANYmal doing parkour, these are still demonstrations. You don’t see robots running around in the forests or robots working in mines and doing all kinds of crazy maintenance operations, or in industrial facilities, or construction sites, you name it. We need to not only be able to do this once as a prototype demonstration, but to have all the capabilities that bring that together with environmental perception and understanding to make this athletic intelligence more capable and more adaptable to all kinds of different environments. This is not something that from today to tomorrow we’re going to see it being revolutionized—it will be gradual, steady progress because I think there’s still a lot of fundamental work that needs to be done. I feel like the mobility of legged robots has improved a lot over the last five years or so, and a lot of that progress has come from Boston Dynamics and also from your lab. Do you feel the same? Hutter: There has always been progress; the question is how much you can zoom in or zoom out. I think one thing has changed quite a bit, and that’s the availability of robotic systems to all kinds of different research groups. If you look back a decade, people had to build their own robots, they had to do the control for the robots, they had to work on the perception for the robots, and putting everything together like that makes it extremely fragile and very challenging to make something that works more than once. That has changed, which allows us to make faster progress. Marc Raibert (founder of the AI Institute) likes to show videos of mountain goats to illustrate what robots should be (or will be?) capable of. Does that kind of thing inspire you as well? Hutter: If you look at the animal kingdom, there’s so many things you can draw inspiration from. And a lot of this stuff is not only the cognitive side; it’s really about pairing the cognitive side with the mechanical intelligence of things like the simple-seeming hooves of mountain goats. But they’re really not that simple, they’re pretty complex in how they interact with the environment. Having one of these things and not the other won’t allow the animal to move across its challenging environment. It’s the same thing with the robots. It’s always been like this in robotics, where you push on the hardware side, and your controls become better, so you hit a hardware limitation. So both things have to evolve hand in hand. Otherwise, you have an over-dimensioned hardware system that you can’t use because you don’t have the right controls, or you have very sophisticated controls and your hardware system can’t keep up. 
How do you feel about all of the investment into humanoids right now, when quadrupedal robots with arms have been around for quite a while? Hutter: There’s a lot of ongoing research on quadrupeds with arms, and the nice thing is that these technologies that are developed for mobile systems with arms are the same technologies that are used in humanoids. It’s not different from a research point of view, it’s just a different form factor for the system. I think from an application point of view, the story from all of these companies making humanoids is that our environment has been adapted to humans quite a bit. A lot of tasks are at the height of a human standing, right? A quadruped doesn’t have the height to see things or to manipulate things on a table. It’s really application dependent, and I wouldn’t say that one system is better than the other.

  • Andrew Ng: Unbiggen AI
    by Eliza Strickland on 9. February 2022. at 15:31

    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant's AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that's what he told IEEE Spectrum in an exclusive Q&A. Ng's current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield "small data" solutions to big issues in AI, including model efficiency, accuracy, and bias. The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that's an unsustainable trajectory. Do you agree that it can't go on that way? Andrew Ng: This is a big question. We've seen foundation models in NLP [natural language processing]. I'm excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there's lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there's a set of other problems that need small data solutions. When you say you want a foundation model for computer vision, what do you mean by that? Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they're reasonably fair and free from bias, especially if many of us will be building on top of them. What needs to happen for someone to build a foundation model for video? Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that's why foundation models have arisen first in NLP. Many researchers are working on this, and I think we're seeing early signs of such models being developed in computer vision. But I'm confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision. Having said that, a lot of what's happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. 
While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn't work for other industries. It's funny to hear you say that, because your early work was at a consumer-facing company with millions of users. Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google's compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn't just be in scaling up, and that I should instead focus on architecture innovation. "In many industries where giant data sets simply don't exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn." —Andrew Ng, CEO & Founder, Landing AI I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, "CUDA is really complicated to program. As a programming paradigm, this seems like too much work." I did manage to convince him; the other person I did not convince. I expect they're both convinced now. Ng: I think so, yes. Over the past year as I've been speaking to people about the data-centric AI movement, I've been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I've been getting the same mix of "there's nothing new here" and "this seems like the wrong direction." How do you define data-centric AI, and why do you consider it a movement? Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it's now more productive to hold the neural network architecture fixed, and instead find ways to improve the data. When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, "Yes, we've been doing this for 20 years." This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline. The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up. You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them? Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don't work with only 50 images. 
But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn. When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set? Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system. “Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.” —Andrew Ng For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance. Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training? Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle. One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way. When you talk about engineering the data, what do you mean exactly? Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. 
But I'm excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity. For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow. What about using synthetic data, is that often a good solution? Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I'd love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development. Do you mean that synthetic data would allow you to try the model on more data sets? Ng: Not really. Here's an example. Let's say you're trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it's doing well overall but it's performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category. "In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models." —Andrew Ng Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data. To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment? Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data. One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory. How do you deal with changing needs? 
If products change or lighting conditions change in the factory, can the model keep up? Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don't expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there's a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it's 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations. In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists? So you're saying that to make it scale, you have to empower customers to do a lot of the training and other work. Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital's IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That's what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains. Is there anything else you think it's important for people to understand about the work you're doing or the data-centric AI movement? Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it's quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today's neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it. This article appears in the April 2022 print issue as "Andrew Ng, AI Minimalist."
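To make the tooling Ng describes a bit more concrete, here is a minimal sketch, on an invented toy dataset, of two of the checks discussed in the interview: flagging examples whose labels disagree across annotators, and slicing accuracy by defect class to see where more (possibly synthetic) data would help most. The data structures, class names, and thresholds are ours for illustration; they are not Landing AI's actual tools.

# Toy illustration of two data-centric AI checks; all data here is invented.
from collections import Counter, defaultdict

# 1) Flag examples whose labels disagree across annotators, worst first.
annotations = {
    "img_001": ["scratch", "scratch", "scratch"],
    "img_002": ["dent", "pit_mark", "dent"],
    "img_003": ["pit_mark", "scratch", "dent"],
}

def agreement(labels):
    """Fraction of annotators that agree with the majority label."""
    return Counter(labels).most_common(1)[0][1] / len(labels)

to_relabel = sorted((img for img, labels in annotations.items() if agreement(labels) < 1.0),
                    key=lambda img: agreement(annotations[img]))
print("relabel first:", to_relabel)

# 2) Slice accuracy by class to target data collection or synthetic generation.
ground_truth = ["scratch", "dent", "pit_mark", "pit_mark", "scratch", "pit_mark"]
predictions = ["scratch", "dent", "dent", "pit_mark", "scratch", "scratch"]

per_class = defaultdict(lambda: [0, 0])  # class -> [correct, total]
for truth, pred in zip(ground_truth, predictions):
    per_class[truth][0] += int(truth == pred)
    per_class[truth][1] += 1

for cls, (correct, total) in sorted(per_class.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    note = "needs more (possibly synthetic) data" if correct / total < 0.7 else "ok"
    print(f"{cls}: {correct}/{total} correct -> {note}")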

  • How AI Will Change Chip Design
    by Rina Diane Caballar on 8. February 2022. at 14:00

    The end of Moore's Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they're turning to other approaches to chip design, incorporating technologies like AI into the process. Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google's TPU V4 AI chip has doubled its processing power compared with that of its previous version. But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks' MATLAB platform. How is AI currently being used to design the next generation of chips? Heather Gorr: AI is such an important technology because it's involved in most parts of the cycle, including the design and manufacturing process. There's a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you're designing the light and the sensors and all the different components. There's a lot of anomaly detection and fault mitigation that you really want to consider. Heather Gorr MathWorks Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you've had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI. What are the benefits of using AI for chip design? Gorr: Historically, we've seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we're seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design. So it's like having a digital twin in a sense? Gorr: Exactly. That's pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end. So, it's going to be more efficient and, as you said, cheaper? Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you're trying different things. 
That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering. We’ve talked about the benefits. How about the drawbacks? Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years. Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together. One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge. How can engineers use AI to better prepare and extract insights from hardware or sensor data? Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start. One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI. What should engineers and designers consider when using AI for chip design? Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team. How do you think AI will affect chip designers’ jobs? Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip. How do you envision the future of AI and chip design? Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. 
We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
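As a simplified illustration of the surrogate-model workflow Gorr describes, the sketch below fits a cheap polynomial to a handful of runs of an "expensive" simulation and then does a dense parameter sweep on the surrogate. The placeholder physics function, the polynomial degree, and the sweep range are assumptions; a real flow would call an actual solver and choose a surrogate family suited to the problem.

# Sketch of a reduced-order (surrogate) model used for fast parameter sweeps.
import numpy as np

def expensive_physics_sim(x):
    """Stand-in for a slow physics-based simulation of some design metric."""
    return np.sin(3 * x) + 0.3 * x ** 2

# A handful of expensive runs...
x_train = np.linspace(0.0, 2.0, 8)
y_train = expensive_physics_sim(x_train)

# ...fit a cheap polynomial surrogate to them...
surrogate = np.polynomial.Polynomial.fit(x_train, y_train, deg=4)

# ...then sweep thousands of candidate parameters on the surrogate instead.
x_sweep = np.random.default_rng(0).uniform(0.0, 2.0, 10_000)
y_sweep = surrogate(x_sweep)
best = x_sweep[np.argmin(y_sweep)]
print(f"surrogate suggests a design parameter near x = {best:.3f}; "
      f"confirm with a few full simulations")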

  • Atomically Thin Materials Significantly Shrink Qubits
    by Dexter Johnson on 7. February 2022. at 16:12

    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality. IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today's qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability. Now researchers at MIT have been able to both reduce the size of the qubits and do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100. "We are addressing both qubit miniaturization and quality," said William Oliver, the director for the Center for Quantum Engineering at MIT. "Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand." The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit. Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C). Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator.Nathan Fiske/MIT In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another. As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance. In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates. 
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics. On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas. While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor. “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are “sealed” and we don’t see any noticeable degradation over time when exposed to the atmosphere.” This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits. “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang. Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
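A back-of-the-envelope parallel-plate calculation shows why a dielectric only a few nanometers thick shrinks the capacitor footprint so dramatically. The target capacitance (about 70 femtofarads, a representative transmon shunt value), the hBN permittivity, and the film thickness used below are assumed for illustration; they are not figures from the MIT paper.

# Parallel-plate capacitor footprint with a thin hBN dielectric (illustrative values).
EPS0 = 8.854e-12      # vacuum permittivity, F/m
EPS_R_HBN = 3.5       # assumed out-of-plane relative permittivity of hBN
THICKNESS_M = 5e-9    # a few atomic monolayers of hBN, assumed
TARGET_C_F = 70e-15   # representative transmon shunt capacitance, assumed

area_m2 = TARGET_C_F * THICKNESS_M / (EPS0 * EPS_R_HBN)   # from C = eps0 * epsr * A / d
side_um = (area_m2 ** 0.5) * 1e6
coplanar_area_um2 = 100.0 * 100.0                         # plate size quoted in the article

print(f"parallel-plate footprint: ~{side_um:.1f} um on a side")
print(f"area reduction vs. a 100 um x 100 um coplanar plate: "
      f"~{coplanar_area_um2 / (area_m2 * 1e12):.0f}x")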
