IEEE News

IEEE Spectrum

  • Deepfake Porn Is Leading to a New Protection Industry
    by Eliza Strickland on 15 July 2024 at 12:00

    It’s horrifyingly easy to make deepfake pornography of anyone thanks to today’s generative AI tools. A 2023 report by Home Security Heroes (a company that reviews identity-theft protection services) found that it took just one clear image of a face and less than 25 minutes to create a 60-second deepfake pornographic video—for free. The world took notice of this new reality in January when graphic deepfake images of Taylor Swift circulated on social media platforms, with one image receiving 47 million views before it was removed. Others in the entertainment industry, most notably Korean pop stars, have also seen their images taken and misused—but so have people far from the public spotlight. There’s one thing that virtually all the victims have in common, though: According to the 2023 report, 99 percent of victims are women or girls. This dire situation is spurring action, largely from women who are fed up. As one startup founder, Nadia Lee, puts it: “If safety tech doesn’t accelerate at the same pace as AI development, then we are screwed.” While there’s been considerable research on deepfake detectors, they struggle to keep up with deepfake generation tools. What’s more, detectors help only if a platform is interested in screening out deepfakes, and most deepfake porn is hosted on sites dedicated to that genre. “Our generation is facing its own Oppenheimer moment,” says Lee, CEO of the Australia-based startup That’sMyFace. “We built this thing”—that is, generative AI—”and we could go this way or that way with it.” Lee’s company is first offering visual-recognition tools to corporate clients who want to be sure their logos, uniforms, or products aren’t appearing in pornography (think, for example, of airline stewardesses). But her long-term goal is to create a tool that any woman can use to scan the entire Internet for deepfake images or videos bearing her own face. 
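The article doesn’t describe the internals of tools like Lee’s, but the core matching step in any such face-scanning service is typically an embedding comparison: a face-recognition model maps each face image to a numeric vector, and two vectors that are close enough are flagged as the same person. A minimal sketch of that step, with toy hand-made vectors standing in for real model outputs (all names and the 0.85 threshold are hypothetical):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def find_matches(reference_embedding, candidates, threshold=0.85):
    """Return the candidate IDs whose face embedding is close enough to the
    reference face to be flagged for human review.

    `candidates` maps an ID (e.g. a post URL) to an embedding produced by
    some face-recognition model (hypothetical here)."""
    return [
        cid for cid, emb in candidates.items()
        if cosine_similarity(reference_embedding, emb) >= threshold
    ]

# Toy 4-dimensional embeddings; a real model would output hundreds of dims.
reference = [0.9, 0.1, 0.3, 0.2]
candidates = {
    "post-001": [0.88, 0.12, 0.31, 0.19],  # near-duplicate of the reference
    "post-002": [0.1, 0.9, 0.2, 0.4],      # a different face
}
print(find_matches(reference, candidates))  # prints ['post-001']
```

In a production system the embeddings would come from a trained face-recognition network and the threshold would be tuned to trade false alarms against misses; the comparison itself stays this simple.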
Another startup founder had a personal reason for getting involved. Breeze Liu was herself a victim of deepfake pornography in 2020; she eventually found more than 800 links leading to the fake video. She felt humiliated, she says, and was horrified to find that she had little recourse: The police said they couldn’t do anything, and she herself had to identify all the sites where the video appeared and petition to get it taken down—appeals that were not always successful. There had to be a better way, she thought. “We need to use AI to combat AI,” she says. Liu, who was already working in tech, founded Alecto AI, a startup named after a Greek goddess of vengeance. The app she’s building lets users deploy facial recognition to check for wrongful use of their own image across the major social media platforms (she’s not considering partnerships with porn platforms). Liu aims to partner with the social media platforms so her app can also enable immediate removal of offending content. “If you can’t remove the content, you’re just showing people really distressing images and creating more stress,” she says. Liu says she’s currently negotiating with Meta about a pilot program, which she says will benefit the platform by providing automated content moderation. Thinking bigger, though, she says the tool could become part of the “infrastructure for online identity,” letting people check also for things like fake social media profiles or dating site profiles set up with their image.

Can Regulations Combat Deepfake Porn?

Removing deepfake material from social media platforms is hard enough—removing it from porn platforms is even harder. To have a better chance of forcing action, advocates for protection against image-based sexual abuse think regulations are required, though they differ on what kind of regulations would be most effective.
Susanna Gibson started the nonprofit MyOwn after her own deepfake horror story. She was running for a seat in the Virginia House of Delegates in 2023 when the official Republican party of Virginia mailed out sexual imagery of her that had been created and shared without her consent, including, she says, screenshots of deepfake porn. After she narrowly lost the election, she devoted herself to leading the legislative charge in Virginia and then nationwide to fight back against image-based sexual abuse. Her first win was a bill that the Virginia governor signed in April to expand the state’s existing “revenge porn” law to cover more types of imagery. “It’s nowhere near what I think it should be, but it’s a step in the right direction of protecting people,” Gibson says. While several federal bills have been introduced to explicitly criminalize the nonconsensual distribution of intimate imagery or deepfake porn in particular, Gibson says she doesn’t have great hopes of those bills becoming the law of the land. There’s more action at the state level, she says. “Right now there are 49 states, plus D.C., that have legislation against nonconsensual distribution of intimate imagery,” Gibson says. “But the problem is that each state is different, so it’s a patchwork of laws. And some are significantly better than others.” Gibson notes that almost all of the laws require proof that the perpetrator acted with intent to harass or intimidate the victim, which can be very hard to prove. Among the different laws, and the proposals for new laws, there’s considerable disagreement about whether the distribution of deepfake porn should be considered a criminal or civil matter.
And if it’s civil, which means that victims have the right to sue for damages, there’s disagreement about whether the victims should be able to sue the individuals who distributed the deepfake porn or the platforms that hosted it. Beyond the United States is an even larger patchwork of policies. In the United Kingdom, the Online Safety Act passed in 2023 criminalized the distribution of deepfake porn, and an amendment proposed this year may criminalize its creation as well. The European Union recently adopted a directive that combats violence and cyberviolence against women, which includes the distribution of deepfake porn, but member states have until 2027 to implement the new rules. In Australia, a 2021 law made it a civil offense to post intimate images without consent, but a newly proposed law aims to make it a criminal offense, and also aims to explicitly address deepfake images. South Korea has a law that directly addresses deepfake material, and unlike many others, it doesn’t require proof of malicious intent. China has a comprehensive law restricting the distribution of “synthetic content,” but there’s been no evidence of the government using the regulations to crack down on deepfake porn. While women wait for regulatory action, services from companies like Alecto AI and That’sMyFace may fill the gaps. But the situation calls to mind the rape whistles that some urban women carry in their purses so they’re ready to summon help if they’re attacked in a dark alley. It’s useful to have such a tool, sure, but it would be better if our society cracked down on sexual predation in all its forms, and tried to make sure that the attacks don’t happen in the first place.

  • Inside the Three-Way Race to Create the Most Widely Used Laser
    by Julianne Pepitone on 14 July 2024 at 13:00

    The semiconductor laser, invented more than 60 years ago, is the foundation of many of today’s technologies including barcode scanners, fiber-optic communications, medical imaging, and remote controls. The tiny, versatile device is now an IEEE Milestone. The possibilities of laser technology had set the scientific world alight in 1960, when the laser, long described in theory, was first demonstrated. Three U.S. research centers unknowingly began racing each other to create the first semiconductor version of the technology. The three—General Electric, IBM’s Thomas J. Watson Research Center, and the MIT Lincoln Laboratory—independently reported the first demonstrations of a semiconductor laser, all within a matter of days in 1962. The semiconductor laser was dedicated as an IEEE Milestone at three ceremonies, with a plaque marking the achievement installed at each facility. The Lincoln Lab event is available to watch on demand.

Invention of the laser spurs a three-way race

The core concept of the laser dates back to 1917, when Albert Einstein theorized about “stimulated emission.” Scientists already knew electrons could absorb and emit light spontaneously, but Einstein posited that electrons could be manipulated to emit at a particular wavelength. It took decades for engineers to turn his theory into reality. In the late 1940s, physicists were working to improve the design of a vacuum tube used by the U.S. military in World War II to detect enemy planes by amplifying their signals. Charles Townes, a researcher at Bell Labs in Murray Hill, N.J., was one of them. He proposed creating a more powerful amplifier that passed a beam of electromagnetic waves through a cavity containing gas molecules. The beam would stimulate the atoms in the gas to release their energy exactly in step with the beam’s waves, amplifying it so that it exited the cavity as a much more powerful beam.
In 1954 Townes, then a physics professor at Columbia, created the device, which he called a “maser” (short for microwave amplification by stimulated emission of radiation). It would prove an important precursor to the laser. Many theorists had told Townes his device couldn’t possibly work, according to an article published by the American Physical Society. Once it did work, the article says, other researchers quickly replicated it and began inventing variations. Townes and other engineers figured that by harnessing higher-frequency energy, they could create an optical version of the maser that would generate beams of light. Such a device potentially could generate more powerful beams than were possible with microwaves, but it also could create beams of varied wavelengths, from the infrared to the visible. In 1958 Townes published a theoretical outline of the “laser.” Several teams worked to fabricate such a device, and in May 1960 Theodore Maiman, a researcher at Hughes Research Lab, in Malibu, Calif., built the first working laser. Maiman’s paper, published in Nature three months later, described the invention as a high-power lamp that flashed light onto a ruby rod placed between two mirrorlike silver-coated surfaces. The optical cavity created by the surfaces oscillated the light produced by the ruby’s fluorescence, achieving Einstein’s stimulated emission. The basic laser was now a reality. Engineers quickly began creating variations. Many perhaps were most excited by the potential for a semiconductor laser. Semiconducting material can be manipulated to conduct electricity under the right conditions.
By its nature, a laser made from semiconducting material could pack all the required elements of a laser—a source of light generation and amplification, lenses, and mirrors—into a micrometer-scale device. “These desirable attributes attracted the imagination of scientists and engineers” across disciplines, according to the Engineering and Technology History Wiki. A pair of researchers discovered in 1962 that an existing material was a great laser semiconductor: gallium arsenide.

Gallium arsenide was ideal for a semiconductor laser

On 9 July 1962, MIT Lincoln Laboratory researchers Robert Keyes and Theodore Quist told the audience at the Solid State Device Research Conference that they were developing an experimental semiconductor laser, IEEE Fellow Paul W. Juodawlkis said during his speech at the IEEE Milestone dedication ceremony at MIT. Juodawlkis is director of the MIT Lincoln Laboratory’s quantum information and integrated nanosystems group. The laser wasn’t yet emitting a coherent beam, but the work was advancing quickly, Keyes said. And then Keyes and Quist shocked the audience: They said they could prove that nearly 100 percent of the electrical energy injected into a gallium-arsenide semiconductor could be converted into light.

MIT Lincoln Laboratory’s [from left] Robert Keyes, Theodore M. Quist, and Robert Rediker testing their laser on a TV set. Credit: MIT Lincoln Laboratory

No one had made such a claim before. The audience was incredulous—and vocally so. “When Bob [Keyes] was done with his talk, one of the audience members stood up and said, ‘Uh, that violates the second law of thermodynamics,’” Juodawlkis said. The audience erupted into laughter. But physicist Robert N. Hall—a semiconductor expert working at GE’s research laboratory in Schenectady, N.Y.—silenced them. “Bob Hall stood up and explained why it didn’t violate the second law,” Juodawlkis said. “It created a real buzz.” Several teams raced to develop a working semiconductor laser.
The margin of victory ultimately came down to a few days.

A ‘striking coincidence’

A semiconductor laser is made with a tiny semiconductor crystal that is suspended inside a glass container filled with liquid nitrogen, which helps keep the device cool. Credit: General Electric Research and Development Center/AIP Emilio Segrè Visual Archives

Hall returned to GE, inspired by Keyes and Quist’s speech, certain that he could lead a team to build an efficient, effective gallium arsenide laser. He had already spent years working with semiconductors and invented what is known as a “p-i-n” diode rectifier. Using a crystal made of purified germanium, a semiconducting material, the rectifier could convert AC to DC—a crucial development for solid-state semiconductors used in electrical transmission. That experience helped accelerate the development of semiconductor lasers. Hall and his team used a similar setup to the “p-i-n” rectifier. They built a diode laser that generated coherent light from a gallium arsenide crystal one-third of one millimeter in size, sandwiched into a cavity between two mirrors so the light bounced back and forth repeatedly. The news of the invention came out in the 1 November 1962 issue of Physical Review Letters. As Hall and his team worked, so did researchers at the Watson Research Center, in Yorktown Heights, N.Y. In February 1962 Marshall I. Nathan, an IBM researcher who previously worked with gallium arsenide, received a mandate from his department director, according to ETHW: Create the first gallium arsenide laser. Nathan led a team of researchers including William P. Dumke, Gerald Burns, Frederick H. Dill, and Gordon Lasher to develop the laser. They completed the task in October and hand-delivered a paper outlining their work to Applied Physics Letters, which published it on 4 October 1962. Over at MIT’s Lincoln Laboratory, Quist, Keyes, and their colleague Robert Rediker published their findings in Applied Physics Letters on 5 November 1962.
It had all happened so quickly that a New York Times article marveled about the “striking coincidence,” noting that IBM officials didn’t know about GE’s success until GE sent invitations to a news conference. An MIT spokesperson told the Times that GE had achieved success “a couple days or a week” before its own team. Both IBM and GE had applied for U.S. patents in October, and both patents were ultimately granted. All three facilities now have been honored by IEEE for their work. “Perhaps nowhere else has the semiconductor laser had greater impact than in communications,” according to an ETHW entry, “where every second, a semiconductor laser quietly encodes the sum of human knowledge into light, enabling it to be shared almost instantaneously across oceans and space.”

IBM Research’s semiconductor laser used a gallium arsenide p-n diode, which was patterned into a small optical cavity with an etched mesa structure. Credit: IBM

Juodawlkis, speaking at the Lincoln Lab ceremony, noted that semiconductor lasers are used “every time you make a cellphone call” or “Google silly cat videos.” “If we look in the broader world,” he said, “semiconductor lasers are really one of the founding pedestals of the information age.” He concluded his speech with a quote summing up a 1963 Time magazine article: “If the world is ever afflicted with a choice between thousands of different TV programs, a few diodes with their feeble beams of infrared light might carry them all at once.” That was a “prescient foreshadowing of what semiconductor lasers have enabled,” Juodawlkis said. “It’s amazing what these … three organizations in the Northeast of the United States did 62 years ago to provide all this capability for us now and into the future.” Plaques recognizing the technology are now displayed at GE, the Watson Research Center, and the Lincoln Laboratory. They read: In the autumn of 1962, General Electric’s Schenectady and Syracuse facilities, IBM Thomas J.
Watson Research Center, and MIT Lincoln Laboratory each independently reported the first demonstrations of the semiconductor laser. Smaller than a grain of rice, powered using direct current injection, and available at wavelengths spanning the ultraviolet to the infrared, the semiconductor laser became ubiquitous in modern communications, data storage, and precision measurement systems. The IEEE Boston, New York, and Schenectady sections sponsored the nomination. Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world.
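One reason gallium arsenide made such a good laser material is that it has a direct bandgap: injected electrons recombine by emitting photons whose wavelength is set almost entirely by the bandgap energy, via the Planck relation. A quick back-of-the-envelope sketch (the 1.42 eV room-temperature bandgap figure is a standard textbook value, not from the article; the 1962 devices ran cryogenically cooled, which shifts the number slightly):

```python
# Planck relation: wavelength = h * c / E_gap
h = 6.626e-34      # Planck constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electron-volt

def emission_wavelength_nm(bandgap_eV):
    """Approximate emission wavelength for a direct-bandgap semiconductor."""
    return h * c / (bandgap_eV * eV) * 1e9

# GaAs has a direct bandgap of about 1.42 eV at room temperature,
# putting its emission in the near-infrared (roughly 870 nm) --
# consistent with the "feeble beams of infrared light" Time described.
print(round(emission_wavelength_nm(1.42)))
```

The same relation explains the plaque’s range: materials with wider bandgaps emit toward the ultraviolet, narrower ones toward the infrared.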

  • Soft Robot Can Amputate and Reattach Its Own Legs
    by Evan Ackerman on 13 July 2024 at 12:00

    Among the many things that humans cannot do (without some fairly substantial modification) is shifting our body morphology around on demand. It sounds a little extreme to be talking about things like self-amputation, and it is a little extreme, but it’s also not at all uncommon for other animals to do—lizards can disconnect their tails to escape a predator, for example. And it works in the other direction, too, with animals like ants adding to their morphology by connecting to each other to traverse gaps that a single ant couldn’t cross alone. In a new paper, roboticists from The Faboratory at Yale University have given a soft robot the ability to detach and reattach pieces of itself, editing its body morphology when necessary. It’s a little freaky to watch, but it kind of makes me wish I could do the same thing.

[ Faboratory at Yale ]

These are fairly standard soft-bodied silicone robots that use asymmetrically stiff air chambers that inflate and deflate (using a tethered pump and valves) to generate a walking or crawling motion. What’s new here are the joints, which rely on a new material called a bicontinuous thermoplastic foam (BTF) to form a supportive structure for a sticky polymer that’s solid at room temperature but can be easily melted. The BTF acts like a sponge to prevent the polymer from running out all over the place when it melts, and means that you can pull two BTF surfaces apart by melting the joint, and stick them together again by reversing the procedure. The process takes about 10 minutes and the resulting joint is quite strong. It’s also good for a couple of hundred detach/re-attach cycles before degrading. It even stands up to dirt and water reasonably well.

[ Faboratory at Yale ]

This kind of thing has been done before with mechanical connections and magnets and other things like that—getting robots to attach to and detach from other robots is a foundational technique for modular robotics, after all.
But these systems are inherently rigid, which is bad for soft robots, whose whole thing is about not being rigid. It’s all very preliminary, of course, because there are plenty of rigid things attached to these robots with tubes and wires and stuff. And there’s no autonomy or payloads here either. That’s not the point, though—the point is the joint, which (as the researchers point out) is “the first instantiation of a fully soft reversible joint” resulting in the “potential for soft artificial systems [that can] shape change via mass addition and subtraction.” “Self-Amputating and Interfusing Machines,” by Bilige Yang, Amir Mohammadi Nasab, Stephanie J. Woodman, Eugene Thomas, Liana G. Tilton, Michael Levin, and Rebecca Kramer-Bottiglio from Yale, was published in May in Advanced Materials.

  • Video Friday: Unitree Talks Robots
    by Evan Ackerman on 12 July 2024 at 16:21

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS IROS 2024: 14–18 October 2024, ABU DHABI, UAE ICSR 2024: 23–26 October 2024, ODENSE, DENMARK Cybathlon 2024: 25–27 October 2024, ZURICH Enjoy today’s videos! At ICRA 2024, Spectrum editor Evan Ackerman sat down with Unitree Founder and CEO Xingxing Wang and Tony Yang, VP of Business Development, to talk about the company’s newest humanoid, the G1 model. [ Unitree ] SACRIFICE YOUR BODY FOR THE ROBOT [ WVUIRL ] From navigating uneven terrain outside the lab to pure vision perception, GR-1 continues to push the boundaries of what’s possible. [ Fourier ] Aerial manipulation has gained interest for completing high-altitude tasks that are challenging for human workers, such as contact inspection and defect detection. This letter addresses a more general and dynamic task: simultaneously tracking time-varying contact force and motion trajectories on tangential surfaces. We demonstrate the approach on an aerial calligraphy task using a novel sponge pen design as the end-effector. [ CMU ] LimX Dynamics Biped Robot P1 was kicked and hit: Faced with random impacts in a crowd, P1 with its new design once again showcased exceptional stability as a mobility platform. [ LimX Dynamics ] Thanks, Ou Yan! This is from ICRA 2018, but it holds up pretty well in the novelty department. [ SNU INRoL ] I think someone needs to crank the humor setting up on this one. [ Deep Robotics ] The paper summarizes the work at the Micro Air Vehicle Laboratory on end-to-end neural control of quadcopters. A major challenge in bringing these controllers to life is the “reality gap” between the real platform and the training environment. 
To address this, we combine online identification of the reality gap with pre-trained corrections through a deep neural controller, which is orders of magnitude more efficient than traditional computation of the optimal solution. [ MAVLab ] This is a dedicated Track Actuator from HEBI Robotics. Why they didn’t just call it a “tracktuator” is beyond me. [ HEBI Robotics ] Menteebot can navigate complex environments by combining a 3D model of the world with a dynamic obstacle map. On the first day in a new location, Menteebot generates the 3D model by following a person who shows the robot around. [ Mentee Robotics ] Here’s that drone with a 68kg payload and 70km range you’ve always wanted. [ Malloy ] AMBIDEX is a dual-armed robot with an innovative mechanism developed for safe coexistence with humans. Based on an innovative cable structure, it is designed to be both strong and stable. [ NAVER Labs ] As quadrotors take on an increasingly diverse range of roles, researchers often need to develop new hardware platforms tailored for specific tasks, introducing significant engineering overhead. In this article, we introduce the UniQuad series, a unified and versatile quadrotor hardware platform series that offers high flexibility to adapt to a wide range of common tasks, excellent customizability for advanced demands, and easy maintenance in case of crashes. [ HKUST ] The video demonstrates the field testing of a 43 kg (95 lb) amphibious cycloidal propeller unmanned underwater vehicle (Cyclo-UUV) developed at the Advanced Vertical Flight Laboratory, Texas A&M University. The vehicle utilizes a combination of cycloidal propellers (or cyclo-propellers), screw propellers, and tank treads for operations on land and underwater. [ TAMU ] The “pill” (the package hook) on Wing’s delivery drones is a crucial component to our aircraft! Did you know our package hook is designed to be aerodynamic and has stable flight characteristics, even at 65 mph? 
[ Wing ] Happy 50th to robotics at ABB! [ ABB ] This JHU Center for Functional Anatomy & Evolution Seminar is by Chen Li, on Terradynamics of Animals & Robots in Complex Terrain. [ JHU ]

  • Food Service Robots Just Need the Right Ingredients
    by Evan Ackerman on 11 July 2024 at 18:51

    Food prep is one of those problems that seems like it should be solvable by robots. It’s a predictable, repetitive, basic manipulation task in a semi-structured environment—seems ideal, right? And obviously there’s a huge need, because human labor is expensive and getting harder and harder to find in these contexts. There are currently over a million unfilled jobs in the food industry in the United States, and even with jobs that are filled, the annual turnover rate is 150 percent (meaning a lot of workers don’t even last a year). Food prep seems like a great opportunity for robots, which is why Chef Robotics and a handful of other robotics companies tackled it a couple years ago by bringing robots to fast casual restaurants like Chipotle or Sweetgreen, where you get served a custom-ish meal from a selection of ingredients at a counter. But this didn’t really work out, for a couple of reasons. First, doing things that are mostly effortless for humans is inevitably extremely difficult for robots. And second, humans actually do a lot of useful things in a restaurant context besides just putting food onto plates, and the robots weren’t up for all of those things. Still, Chef Robotics founder and CEO Rajat Bhageria wasn’t ready to let this opportunity go. “The food market is arguably the biggest market that’s tractable for AI today,” he told IEEE Spectrum. And with a bit of a pivot away from the complicated mess of fast casual restaurants, Chef Robotics has still managed to prepare over 20 million meals thanks to autonomous robot arms deployed all over North America. Without knowing it, you may even have eaten such a meal. “The hard thing is, can you pick fast? Can you pick consistently? Can you pick the right portion size without spilling?
And can you pick without making it look like the food was picked by a machine?” —Rajat Bhageria, Chef Robotics

When we spoke with Bhageria, he explained that there are three basic tasks involved in prepared food production: prep (tasks like chopping ingredients), the actual cooking process, and then assembly (or plating). Of these tasks, prep scales pretty well with industrial automation in that you can usually order pre-chopped or mixed ingredients, and cooking also scales well since you can cook more with only a minimal increase in effort just by using a bigger pot or pan or oven. What doesn’t scale well is the assembly, especially when any kind of flexibility or variety is required. You can clearly see this in action at any fast casual restaurant, where a couple of people are in the kitchen cooking up massive amounts of food while each customer gets served one at a time. So with that bottleneck identified, let’s throw some robots at the problem, right? And that’s exactly what Chef Robotics did, explains Bhageria: “we went to our customers, who said that their biggest pain point was labor, and the most labor is in assembly, so we said, we can help you solve this.” Chef Robotics started with fast casual restaurants. They weren’t the first to try this—many other robotics companies had attempted this before, with decidedly mixed results. “We actually had some good success in the early days selling to fast casual chains,” Bhageria says, “but then we had some technical obstacles. Essentially, if we want to have a human-equivalent system so that we can charge a human-equivalent service fee for our robot, we need to be able to do every ingredient. You’re either a full human equivalent, or our customers told us it wouldn’t be useful.” Part of the challenge is that training robots to perform all of the different manipulations required for different assembly tasks requires different kinds of real-world data.
That data simply doesn’t exist—or, if it does, any company that has it knows what it’s worth and isn’t sharing. You can’t easily simulate this kind of data, because food can be gross and difficult to handle, whether it’s gloopy or gloppy or squishy or slimy or unpredictably deformable in some other way, and you really need physical experience to train a useful manipulation model. Setting fast casual restaurants aside for a moment, what about food prep situations where things are as predictable as possible, like mass-produced meals? We’re talking about food like frozen dinners, that have a handful of discrete ingredients packed into trays at factory scale. Frozen meal production relies on automation rather than robotics because the scale is such that the cost of dedicated equipment can be justified. There’s a middle ground, though, where robots have found (some) opportunity: When you need to produce a high volume of the same meal, but that meal changes regularly. For example, think of any kind of pre-packaged meal that’s made in bulk, just not at frozen-food scale. It’s an opportunity for automation in a structured environment—but with enough variety that actual automation isn’t cost effective. Suddenly, robots and their tiny bit of flexible automation have a chance to be a practical solution. “We saw these long assembly lines, where humans were scooping food out of big tubs and onto individual trays,” Bhageria says. “They do a lot of different meals on these lines; it’s going to change over and they’re going to do different meals throughout the week. But at any given moment, each person is doing one ingredient, and maybe on a weekly basis, that person would do six ingredients. This was really compelling for us because six ingredients is something we can bootstrap in a lab. 
We can get something good enough and if we can get something good enough, then we can ship a robot, and if we can ship a robot to production, then we will get real world training data.” Chef Robotics has been deploying robot modules that they can slot into existing food assembly lines in place of humans without any retrofitting necessary. The modules consist of six-degree-of-freedom arms wearing swanky IP67 washable suits. To handle different kinds of food, the robots can be equipped with a variety of different utensils (and their accompanying manipulation software strategies). Sensing includes a few depth cameras, as well as a weight-sensing platform for the food tray to ensure consistent amounts of food are picked. And while arms with six degrees of freedom may be overkill for now, eventually the hope is that they’ll be able to handle more complex food like asparagus, where you need to do a little bit more than just scoop. While Chef Robotics seems to have a viable business here, Bhageria tells us that he keeps coming back to that vision of robots being useful in fast casual restaurants, and eventually, robots making us food in our homes. Making that happen will require time, experience, technical expertise, and an astonishing amount of real-world training data, which is the real value behind those 20 million robot-prepared meals (and counting). The more robots the company deploys, the more data they collect, which will allow them to train their food manipulation models to handle a wider variety of ingredients to open up even more deployments. Their robots, Chef’s website says, “essentially act as data ingestion engines to improve our AI models.” The next step is likely ghost kitchens where the environment is still somewhat controlled and human interaction isn’t necessary, followed by deployments in commercial kitchens more broadly.
But even that won’t be enough for Bhageria, who wants robots that can take over from all of the drudgery in food service: “I’m really excited about this vision,” he says. “How do we deploy hundreds of millions of robots all over the world that allow humans to do what humans do best?”
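    The weight-based portioning described above (scoop, weigh the tray, correct) can be sketched in a few lines. This is a hypothetical illustration under assumed numbers for target weights and tolerances, not Chef Robotics’ actual control code.

```python
import math

def scoops_needed(target_g, grams_per_scoop):
    """Plan an initial number of scoops before closed-loop correction.

    Both arguments are illustrative assumptions (grams per portion and a
    rough per-scoop estimate for a given utensil/ingredient pairing).
    """
    return math.ceil(target_g / grams_per_scoop)

def portion_complete(scale_readings_g, target_g, tol_g=5.0):
    """Check whether the latest tray-scale reading hits the target portion.

    In a real system the reading would come from the weight-sensing
    platform under the tray; here it is just a list of floats.
    """
    if not scale_readings_g:
        return False
    return abs(scale_readings_g[-1] - target_g) <= tol_g

# Plan three 50-gram scoops for a 120-gram portion, then verify by weight.
plan = scoops_needed(120, 50)
done = portion_complete([118.2], 120.0)
```

The point of the closed loop is that the scoop estimate can be crude: the scale reading, not the plan, decides when the portion is finished.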

  • Edith Clarke: Architect of Modern Power Distribution
    by Amanda Davis on 10. Jula 2024. at 18:00

    Edith Clarke was a powerhouse in practically every sense of the word. From the start of her career at General Electric in 1922, she was determined to develop stable, more reliable power grids. And Clarke succeeded, playing a critical role in the rapid expansion of the North American electric grid during the 1920s and ’30s. During her first years at GE she invented what came to be known as the Clarke calculator. The slide rule let engineers solve equations involving electric current, voltage, and impedance 10 times faster than by hand. Her calculator and the power distribution methods she developed paved the way for modern grids. She also worked on hydroelectric power plant designs, according to a 2022 profile in Hydro Review. She broke down barriers during her life. In 1919 she became the first woman to earn a master’s degree in electrical engineering from MIT. Three years later, she became the first woman in the United States to work as an electrical engineer. Her life is chronicled in Edith Clarke: Trailblazer in Electrical Engineering. Written by Paul Lief Rosengren, the book is part of IEEE-USA’s Famous Women Engineers in History series.

    Becoming the first female electrical engineer

    Clarke was born in 1883 in the small farming community of Ellicott City, Md. At the time, few women attended college, and those who did tended to be barred from taking engineering classes. She was orphaned at 12, according to Sandy Levins’s Wednesday’s Women website. After high school, Clarke used a small inheritance from her parents to attend Vassar, a women’s college in Poughkeepsie, N.Y., where she earned a bachelor’s degree in mathematics and astronomy in 1908. That degree was the closest equivalent to an engineering degree available to Vassar students at the time. In 1912 Clarke was hired by AT&T in New York City as a computing assistant. She worked on calculations for transmission lines and electric circuits.
    During the next few years, she developed a passion for power engineering. She enrolled at MIT in 1918 to further her career, according to her Engineering and Technology History Wiki biography. After graduating, though, she had a tough time finding a job in the male-dominated field. After months of applying with no luck, she landed a job at GE in Boston, where she did more or less the same work as she had at AT&T, except now as a supervisor. Clarke led a team of computers—employees (mainly women) who performed long, tedious calculations by hand before computing machines became widely available.

    The Clarke calculator let engineers solve equations involving electric current, voltage, and impedance 10 times faster than by hand. Clarke was granted a U.S. patent for the slide rule in 1925. Science History Images/Alamy

    While at GE she developed her calculator, eventually earning a patent for it in 1925. In 1921 Clarke left GE to become a full-time physics professor at Constantinople Women’s College, in what is now Istanbul, according to a profile by the Edison Tech Center. But she returned to GE a year later when it offered her a salaried electrical engineering position in its Central Station Engineering department in Boston. Although Clarke didn’t earn the same pay or enjoy the same prestige as her male colleagues, the new job launched her career.

    U.S. power grid pioneer

    According to Rosengren’s book, during Clarke’s time at GE, transmission lines were getting longer, and larger power loads were increasing the chances of instability. Mathematical models for assessing grid reliability at the time were better suited to smaller systems. To model systems and power behavior, Clarke created a technique using symmetrical components—a method of converting a three-phase unbalanced system into two sets of balanced phasors and a set of single-phase phasors. The method allowed engineers to analyze the reliability of larger systems.
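    As a quick illustration of the decomposition described above, here is the classical symmetrical-component (Fortescue) transformation in a few lines of Python. The phasor values are arbitrary examples, not figures from Clarke’s paper.

```python
import cmath

# The 120-degree rotation operator used in symmetrical-component analysis.
A = cmath.exp(2j * cmath.pi / 3)

def symmetrical_components(va, vb, vc):
    """Decompose three phase phasors into zero-, positive-, and
    negative-sequence components."""
    v0 = (va + vb + vc) / 3          # zero sequence: three in-phase phasors
    v1 = (va + A * vb + A**2 * vc) / 3   # positive sequence (balanced set)
    v2 = (va + A**2 * vb + A * vc) / 3   # negative sequence (balanced set)
    return v0, v1, v2

# A perfectly balanced a-b-c system reduces to a pure positive-sequence
# component; any imbalance shows up in the zero and negative sequences.
v0, v1, v2 = symmetrical_components(1, A**2, A)
```

The practical payoff is the one the article describes: an unbalanced three-phase fault can be analyzed as three independent balanced problems, which is what made larger systems tractable by hand.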
    Vivien Kellems [left] and Clarke, two of the first women to become full voting members of the American Institute of Electrical Engineers, meeting for the first time in GE’s laboratories in Schenectady, N.Y. Bettmann/Getty Images

    Clarke described the technique in “Steady-State Stability in Transmission Systems,” which was published in 1925 in A.I.E.E. Transactions, a journal of the American Institute of Electrical Engineers, one of IEEE’s predecessors. Clarke had scored another first: She was the first woman to have her work appear in the journal. In the 1930s, Clarke designed the turbine system for the Hoover Dam, a hydroelectric power plant on the Colorado River between Nevada and Arizona. Its electricity was produced by massive GE generators. Clarke’s pioneering system was later installed in similar power plants throughout the western United States. Clarke retired in 1945 and bought a farm in Maryland. She came out of retirement two years later and became the first female electrical engineering professor in the United States when she joined the University of Texas at Austin. She retired for good in 1956 and returned to Maryland, where she died in 1959.

    First female IEEE Fellow

    Clarke’s pioneering work earned her several recognitions never before bestowed on a woman. She was the first woman to become a full voting member of the AIEE and its first female Fellow, in 1948. She received the 1954 Society of Women Engineers Achievement Award “in recognition of her many original contributions to stability theory and circuit analysis.” She was posthumously elected in 2015 to the National Inventors Hall of Fame.

  • Sea Drones in the Russia-Ukraine War Inspire New Tactics
    by Bryan Clark on 10. Jula 2024. at 12:00

    Against all odds, Ukraine is still standing almost two and a half years after Russia’s massive 2022 invasion. Of course, hundreds of billions of dollars in Western support as well as Russian errors have helped immensely, but it would be a mistake to overlook Ukraine’s creative use of new technologies, particularly drones. While uncrewed aerial vehicles have grabbed most of the attention, it is naval drones that could be the key to bringing Russian president Vladimir Putin to the negotiating table. These naval-drone operations in the Black Sea against Russian warships and other targets have been so successful that they are prompting fundamental reevaluations, in London, Paris, Washington, and elsewhere, of how drones will affect future naval operations. In August 2023, for example, the Pentagon launched the billion-dollar Replicator initiative to field air and naval drones (also called sea drones) on a massive scale. It’s widely believed that such drones could be used to help counter a Chinese invasion of Taiwan. And yet Ukraine’s naval-drone initiative grew out of necessity, not grand strategy. Early in the war, Russia’s Black Sea fleet launched cruise missiles into Ukraine and blockaded Odesa, effectively shutting down Ukraine’s exports of grain, metals, and manufactured goods. The missile strikes terrorized Ukrainian citizens and shut down the power grid, but Russia’s blockade was arguably more consequential, devastating Ukraine’s economy and creating food shortages from North Africa to the Middle East. With its navy seized or sunk during the war’s opening days, Ukraine had few options to regain access to the sea. So Kyiv’s troops got creative. Lukashevich Ivan Volodymyrovych, a brigadier general in the Security Service of Ukraine, the country’s counterintelligence agency, proposed building a series of fast, uncrewed attack boats. In the summer of 2022, the service, which is known by the acronym SBU, began with a few prototype drones.
    These quickly led to a pair of naval drones that, when used with commercial satellite imagery, off-the-shelf uncrewed aircraft, and Starlink terminals, gave Ukrainian operators the means to sink or disable a third of Russia’s Black Sea Fleet, including the flagship Moskva and most of the fleet’s cruise-missile-equipped warships. To protect their remaining vessels, Russian commanders relocated the Black Sea Fleet to Novorossiysk, 300 kilometers east of Crimea. This move sheltered the ships from Ukrainian drones and missiles, but it also put them too far away to threaten Ukrainian shipping or defend the Crimean Peninsula. Kyiv has exploited the opening by restoring trade routes and mounting sustained airborne and naval drone strikes against Russian bases on Crimea and the Kerch Strait Bridge connecting the peninsula with Russia.

    How Maguras and Sea Babies Hunt and Attack

    The first Ukrainian drone boats were cobbled together with parts from jet skis, motorboats, and off-the-shelf electronics. But within months, manufacturers working for the Ukrainian defense ministry and SBU fielded several designs that proved their worth in combat, most notably the Magura V5 and the Sea Baby. Carrying a 300-kilogram warhead, on par with that of a heavyweight torpedo, the Magura V5 is a hunter-killer antiship drone designed to work in swarms that confuse and overwhelm a ship’s defenses. Equipped with GPS and Starlink terminals, which connect to SpaceX’s satellites, a group of about three to five Maguras likely moves autonomously to a location near the potential target. From there, operators can wait until conditions are right and then attack the target from multiple angles using remote control and video feeds from the vehicles. A Ukrainian Magura V5 hunter-killer sea drone was demonstrated at an undisclosed location in Ukraine on 13 April 2024.
    The domed pod toward the bow, which can rotate from side to side, contains a thermal camera used for guidance and targeting. Valentyn Origrenko/Reuters/Redux

    Larger than a Magura, the Sea Baby is a multipurpose vehicle that can carry about 800 kg of explosives, which is close to twice the payload of a Tomahawk cruise missile. A Sea Baby was used in 2023 to inflict substantial damage on the Kerch Strait Bridge. A more recent version carries a rocket launcher that Ukrainian troops plan to use against Russian forces along the Dnipro River, which has often formed the front line in the south of the country. Like a Magura, a Sea Baby is likely remotely controlled using Starlink and GPS. In addition to attack, it’s also equipped for surveillance and logistics. Russia reduced the threat to its ships by moving them out of the region, but fixed targets like the Kerch Strait Bridge remain vulnerable to Ukrainian sea drones. To try to protect these structures from drone onslaughts, Russian commanders are taking a “kitchen sink” approach: submerging hulks around bridge supports, fielding more guns to shoot at incoming uncrewed vessels, and jamming GPS and Starlink around the Kerch Strait.

    Ukrainian service members demonstrated the portable, ruggedized consoles used to remotely guide the Magura V5 naval drones in April 2024. Valentyn Origrenko/Reuters/Redux

    While the war remains largely stalemated, Ukraine’s naval drones could yet force Russia into negotiations. The Crimean Peninsula was Moscow’s biggest prize from its decade-long assault on Ukraine. If the Kerch Bridge is severed and the Black Sea Fleet pushed back into Russian ports, Putin may need to end the fighting to regain control over Crimea.

    Why the U.S. Navy Embraced the Swarm

    Ukraine’s small, low-cost sea drones are offering a compelling view of future tactics and capabilities.
But recent experiences elsewhere in the world are highlighting the limitations of drones for some crucial tasks. For example, for protecting shipping from piracy or stopping trafficking and illegal fishing, drones are less useful. Before the Ukraine war, efforts by the U.S. Department of Defense to field surface sea drones focused mostly on large vehicles. In 2015, the Defense Advanced Research Projects Agency started, and the U.S. Navy later continued, a project that built two uncrewed surface vessels, called Sea Hunter and Sea Hawk. These were 130-tonne sea drones capable of roaming the oceans for up to 70 days while carrying payloads of thousands of pounds each. The point was to demonstrate the ability to detect, follow, and destroy submarines. The Navy and the Pentagon’s secretive Strategic Capabilities Office followed with the Ghost Fleet Overlord uncrewed vessel programs, which produced four larger prototypes designed to carry shipping-container-size payloads of missiles, sensors, or electronic countermeasures. The U.S. Navy’s newly created Uncrewed Surface Vessel Division 1 ( USVDIV-1) completed a deployment across the Pacific Ocean last year with four medium and large sea drones: Sea Hunter and Sea Hawk and two Overlord vessels, Ranger and Mariner. The five-month deployment from Port Hueneme, Calif., took the vessels to Hawaii, Japan, and Australia, where they joined in annual exercises conducted by U.S. and allied navies. The U.S. Navy continues to assess its drone fleet through sea trials lasting from several days to a few months. The Sea Hawk is a U.S. Navy trimaran drone vessel designed to find, pursue, and attack submarines. The 130-tonne ship, photographed here in October of 2023 in Sydney Harbor, was built to operate autonomously on missions of up to 70 days, but it can also accommodate human observers on board. Ensign Pierson Hawkins/U.S. 
    Navy

    In contrast with Ukraine’s small sea drones, which are usually remotely controlled and operate outside shipping lanes, the U.S. Navy’s much larger uncrewed vessels have to follow the nautical rules of the road. To navigate autonomously, these big ships rely on robust onboard sensors, processing for computer vision and target-motion analysis, and automation based on predictable forms of artificial intelligence, such as expert- or agent-based algorithms rather than deep learning. But thanks to the success of the Ukrainian drones, the focus and energy in sea drones are rapidly moving to the smaller end of the scale. The U.S. Navy initially envisioned platforms like Sea Hunter conducting missions in submarine tracking, electronic deception, or clandestine surveillance far out at sea. And large drones will still be needed for such missions. However, with the right tactics and support, a group of small sea drones can conduct similar missions as well as other vital tasks. For example, though they are constrained in speed, maneuverability, and power generation, solar- or sail-powered drones can stay out for months with little human intervention. The earliest of these are wave gliders like the Liquid Robotics (a Boeing company) SHARC, which has been conducting undersea and surface surveillance for the U.S. Navy for more than a decade. Newer designs like the Saildrone Voyager and Ocius Blue Bottle incorporate motors and additional solar or diesel power to haul payloads such as radars, jammers, decoys, or active sonars. The Ocean Aero Triton takes this model one step further: It can submerge, to conduct clandestine surveillance or a surprise attack, or to avoid detection.

    The Triton, from Ocean Aero in Gulfport, Miss., is billed as the world’s only autonomous sea drone capable of both cruising underwater and sailing on the surface. Ocean Aero

    Ukraine’s success in the Black Sea has also unleashed a flurry of new small antiship attack drones.
    USVDIV-1 will use the GARC from Maritime Applied Physics Corp. to develop tactics. The Pentagon’s Defense Innovation Unit has also begun purchasing drones for the China-focused Replicator initiative. Among the likely craft being evaluated are fast-attack sea drones from Austin, Texas–based Saronic. Behind the soaring interest in small and inexpensive sea drones is the changing value proposition for naval drones. As recently as four years ago, military planners were focused on using them to replace crewed ships in “dull, dirty, and dangerous” jobs. But now, the thinking goes, sea drones can provide scale, adaptability, and resilience across each link in the “kill chain” that extends from detecting a target to hitting it with a weapon. Today, to attack a ship, most navies generally have one preferred sensor (such as a radar system), one launcher, and one missile. What these planners are now coming to appreciate is that a fleet of crewed surface ships with a collection of a dozen or two naval drones would offer multiple paths to both find that ship and attack it. These craft would also be less vulnerable, because of their dispersion.

    Defending Taiwan by Surrounding It With a “Hellscape”

    U.S. efforts to protect Taiwan may soon reflect this new value proposition. Many classified and unclassified war games suggest Taiwan and its allies could successfully defend the island—but at costs high enough to potentially dissuade a U.S. president from intervening on Taiwan’s behalf. With U.S. defense budgets capped by law and procurement constrained by rising personnel and maintenance costs, substantially growing or improving today’s U.S. military for this specific purpose is unrealistic. Instead, commanders are looking for creative solutions to slow or stop a Chinese invasion without losing most U.S. forces in the process. Naval drones look like a good—and maybe the best—solution.
    The Taiwan Strait is only 160 kilometers (100 miles) wide, and Taiwan’s coastline offers only a few areas where large numbers of troops could come ashore. U.S. naval attack drones positioned on the likely routes could disrupt or possibly even halt a Chinese invasion, much as Ukrainian sea drones have denied Russia access to the western Black Sea and, for that matter, Houthi-controlled drones have sporadically closed off large parts of the Red Sea in the Middle East. The new U.S. Indo-Pacific Command leader, Admiral Sam Paparo, wants to apply this approach to defending Taiwan in a scenario he calls “Hellscape.” In it, U.S. surface and undersea drones would likely be based near Taiwan, perhaps in the Philippines or Japan. When the potential for an invasion rises, the drones would move themselves or be carried by larger uncrewed or crewed ships to the western coast of Taiwan to wait. Sea drones are well-suited to this role, thanks in part to the evolution of naval technologies and tactics over the past half century. Until World War II, submarines were the most lethal threat to ships. But since the Cold War, long-range subsonic, supersonic, and now hypersonic antiship missiles have commanded navy leaders’ attention. They’ve spent decades devising ways to protect their ships against such antiship missiles. Much less effort has gone into defending against torpedoes, mines—or sea drones. A dozen or more missiles might be needed to ensure that just one reaches a targeted ship, and even then, the damage may not be catastrophic. But a single surface or undersea drone could easily evade detection and explode at a ship’s waterline to sink it, because in this case, water pressure does most of the work.
    The level of autonomy available in most sea drones today is more than enough to attack ships in the Taiwan Strait. Details of U.S. military plans are classified, but a recent Hudson Institute report that I wrote with Dan Patt proposes a possible approach. In it, a drone flotilla, consisting of about three dozen hunter-killer surface drones, two dozen uncrewed surface vessels carrying aerial drones, and three dozen autonomous undersea drones, would take up designated positions in a “kill box” adjacent to one of Taiwan’s western beaches if a Chinese invasion fleet had begun massing on the opposite side of the strait. Even if they were based in Japan or the Philippines, the drones could reach Taiwan within a day. Upon receiving a signal from operators remotely using Starlink or locally using a line-of-sight radio, the drones would act as a mobile minefield, attacking troop transports and their escorts inside Taiwan’s territorial waters. Widely available electro-optical and infrared sensors, coupled to recognition algorithms, would direct the drones to targets. Although communications with operators onshore would likely be jammed, the drones could coordinate their actions locally using line-of-sight Internet Protocol–based networks like Silvus or TTNT. For example, surface vessels could launch aerial drones that would attack the pilot houses and radars of ships, while surface and undersea drones strike ships at the waterline. The drones could also coordinate to ensure they do not all strike the same target and to prioritize the largest targets first. These kinds of simple collaborations are routine in today’s drones. Treating drones like mines reduces the complexity needed in their control systems and helps them comply with Pentagon rules for autonomous weapons. Rather than killer robots seeking out and destroying targets, the drones defending Taiwan would be passively waiting for Chinese forces to illegally enter a protected zone, within which they could be attacked.
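    The coordination rule mentioned above (don’t all strike the same target; hit the largest targets first) can be sketched as a toy greedy assignment. Every name and number here is invented for illustration and has no connection to any real drone control system.

```python
def assign_targets(drones, targets):
    """Greedy deconfliction: spread drones across targets, largest first.

    drones:  list of drone IDs
    targets: dict mapping target ID -> estimated displacement (tonnes)
    Returns a dict mapping target ID -> list of assigned drone IDs.
    """
    # Rank targets by estimated size, biggest first.
    ordered = sorted(targets, key=targets.get, reverse=True)
    assignments = {t: [] for t in ordered}
    # Round-robin over the size-ordered list so no single target
    # absorbs every drone.
    for i, drone in enumerate(drones):
        assignments[ordered[i % len(ordered)]].append(drone)
    return assignments

# Five drones, three targets of different estimated sizes (all invented).
plan = assign_targets(
    ["d1", "d2", "d3", "d4", "d5"],
    {"transport_a": 25000, "escort_b": 7500, "transport_c": 22000},
)
```

In practice this kind of logic would run over the local line-of-sight network rather than centrally, but the key property is the same: a simple, deterministic rule that needs no human in the loop once the engagement begins.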
Like Russia’s Black Sea Fleet, the Chinese navy will develop countermeasures to sea drones, such as employing decoy ships, attacking drones from the air, or using minesweepers to move them away from the invasion fleet. To stay ahead, operators will need to continue innovating tactics and behaviors through frequent exercises and experiments, like those underway at U.S. Navy Unmanned Surface Vessel Squadron Three. (Like the USVDIV-1, it is a unit under the U.S. Navy’s Surface Development Squadron One.) Lessons from such exercises would be incorporated into the defending drones as part of their programming before a mission. The emergence of sea drones heralds a new era in naval warfare. After decades of focusing on increasingly lethal antiship missiles, navies now have to defend against capable and widely proliferating threats on, above, and below the water. And while sea drone swarms may be mainly a concern for coastal areas, these choke points are critical to the global economy and most nations’ security. For U.S. and allied fleets, especially, naval drones are a classic combination of threat and opportunity. As the Hellscape concept suggests, uncrewed vessels may be a solution to some of the most challenging and sweeping of modern naval scenarios for the Pentagon and its allies—and their adversaries. This article was updated on 10 July 2024. An earlier version stated that sea drones from Saronic Technologies are being purchased by the U.S. Department of Defense’s Defense Innovation Unit. This could not be publicly confirmed.

  • Notice to Membership
    by IEEE on 9. Jula 2024. at 18:00

    The IEEE Ethics and Member Conduct Committee (EMCC) received complaints through its Ethics Reporting Line against Mr. Mojtaba Sharif Zadeh [also known as Sharif Zadeh], a member of the IEEE. Following an EMCC investigation, a hearing board appointed by the IEEE Board of Directors found cause that Mr. Zadeh violated Section I and Section II, Subsection 8 of the IEEE Code of Ethics, and Section 4 and Section 5e of the IEEE Code of Conduct. The IEEE Board of Directors sustained these findings and imposed the sanction of Expulsion from IEEE membership on Mr. Zadeh, in accordance with IEEE Bylaw I-110.5. The IEEE Board of Directors also determined that this notification to the IEEE membership should be made.

  • Windows on Arm Is Here to Stay
    by Matthew S. Smith on 9. Jula 2024. at 12:00

    For the first time in history, there’s a good chance your next Windows laptop won’t have an x86 chip inside. Microsoft launched a new generation of AI-focused Windows laptops, called Copilot Plus PCs, in June of 2024. Controversy surrounding one of Microsoft’s key AI features made for a shaky debut, but it’s built on a sound foundation: Qualcomm’s Snapdragon X, an Arm system-on-a-chip that goes toe-to-toe with the best from AMD and Intel. Its debut represents a seismic shift in the Windows world. “It’s the most exciting time for PCs I’ve seen in my entire life,” said Anshel Sag, principal analyst at Moor Insights & Strategy. “We are getting better products from the industry because of competition, and ultimately that’s the best thing for everyone. It’s driving better products, prices, software, and the users win.”

    Qualcomm saves Microsoft’s awkward Copilot Plus PC launch

    Microsoft launched Windows for Arm in 2012 with the original Surface, a 2-in-1 powered by Nvidia’s Tegra 3. It didn’t catch on. The original Surface was slow, buggy, and ran an Arm-only version of Windows called Windows RT. Microsoft retreated to x86 with the release of the Intel-powered Surface Pro. But Windows on Arm crawled forward with a new strategy. Instead of building a new version of Windows, Microsoft would bring Arm to the Windows everyone already used. After testing the waters in 2019 with the Qualcomm-powered Surface Pro X, Microsoft and Qualcomm committed fully. The Copilot Plus PC is the result. The launch didn’t go according to plan. Microsoft pitched Copilot Plus PCs on the strength of on-device AI performance, which is accelerated by a neural processing unit (NPU) included in Snapdragon X. But Microsoft bungled the software, going so far as to recall the headline feature (ironically known as Windows Recall) over security and privacy concerns. Sag said that cast a dark cloud over the launch. “Recalling Recall was a very weird overcorrection [...]
    I think they would’ve been fine shipping it disabled by default.”

    Microsoft’s recall of Recall brought the AI-enabled features of Copilot Plus PCs into question. Matthew Berman

    Snapdragon X, on the other hand, met and surpassed expectations. My review of the Surface Laptop 7, published in PCWorld, called it “a new era for Windows PCs.” The Surface Laptop 7, like other Copilot Plus PCs with Snapdragon X chips, benefits from an advantage Arm chips often hold over x86: efficiency. Laptops with Snapdragon X can last over 20 hours on battery, yet meet or beat x86 alternatives in performance benchmarks. Tests have found Snapdragon X is up to 50 percent more efficient than comparable x86 chips in single-core workloads and 20 percent more efficient in multi-core workloads. That translates not only to long battery life but also to less heat, less fan noise, and better performance on battery power. I’m not alone in my praise. Leonard Lee, executive analyst and founder at neXt Curve, said the launch was “a big win” for Qualcomm, though he cautioned that it’s too early to know how well Qualcomm-powered laptops have sold. Sag agreed. “My experience with the [Copilot Plus] PCs that I’ve used so far has been overwhelmingly positive,” he said.

    PC chip competition heats up

    The launch of Qualcomm’s Snapdragon X puts x86 at a disadvantage, at least for the moment. “We don’t know when Intel and AMD are going to come on board with Copilot Plus,” said Sag. “I think the best case scenario is the end of Q4 2024, but to be realistic, it’s probably going to be the beginning of next year.” Lee was more optimistic about x86’s reply. “My early thinking is Intel’s Lunar Lake [architecture] will be a stabilizing entry for Intel. And they have an ecosystem and software backing them up [...] Intel and AMD will fight tooth and nail to be relevant.” We’ll know more this time next year. Qualcomm is likely to announce more Arm chips for Windows PCs later this year.
    The Consumer Electronics Show, to be held in January 2025, will see PC makers announce the next wave of Copilot Plus PCs, some with Arm chips and some with x86. And while Qualcomm was Microsoft’s partner for the Copilot Plus PC launch, other Arm chip makers are likely to join in soon. MediaTek, Nvidia, Broadcom, and Samsung are among the big names that could design their own chips for Windows. Even x86 stalwarts may get into the Arm action; Sag points out that AMD had an Arm chip in development, though it was never released. That’s exciting news for the PC. After decades of lopsided competition between Intel and AMD, the field could expand to include a half dozen chip makers (or more). Each chip will have an integrated GPU and NPU, too, further expanding what Windows PCs can accomplish. Developers, on the other hand, should expect trouble. Unlike Apple, which leaned on vertical integration to transition the Mac from x86 to its own Arm-based chips within just three years, Microsoft must support both architectures: Arm and x86 PCs will exist side by side for years to come. Some developers may choose to rely on Windows’ emulator, which can run x86 software on Arm, but at reduced performance. Other developers—especially those building apps that require high performance—must take on the burden of optimizing software for both. “The ground hasn’t settled yet. We’re going to have a couple years of this uncertainty, and the Arm vendors will add more complexity, and competition, to the PC space than we’ve ever seen before,” said Sag.
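    Developers shipping separate Arm and x86 builds need to detect the host architecture somewhere, for example in an installer or build script. A minimal sketch using only Python’s standard library might look like the following; the alias table is an illustrative assumption, not an exhaustive list of the machine strings Windows can report.

```python
import platform

# Common machine strings seen on Windows and other OSes, mapped to coarse
# labels. Illustrative assumption, not an authoritative list.
_ARCH_ALIASES = {
    "arm64": "arm64",
    "aarch64": "arm64",
    "amd64": "x86_64",
    "x86_64": "x86_64",
    "x86": "x86",
    "i386": "x86",
}

def normalize_arch(machine=None):
    """Map a raw platform.machine() string to a coarse architecture label."""
    raw = (machine or platform.machine()).lower()
    return _ARCH_ALIASES.get(raw, "unknown")

# e.g. pick the right build artifact for the host CPU
artifact = f"myapp-windows-{normalize_arch()}.zip"
```

Note that code running under the x86 emulator on an Arm PC may see the emulated architecture rather than the native one, which is exactly the kind of ambiguity that keeps dual-architecture support from being free.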

  • New Fiber Optics Tech Smashes Data Rate Record
    by Margo Anderson on 8. Jula 2024. at 17:57

    An international team of researchers has smashed the world record for fiber optic communications through commercial-grade fiber. By broadening fiber’s communication bandwidth, the team has produced data rates four times as fast as existing commercial systems—and 33 percent better than the previous world record. The researchers’ success derives in part from their innovative use of optical amplifiers to boost signals across communications bands that conventional fiber optics technology today uses less frequently. “It’s just more spectrum, more or less,” says Ben Puttnam, chief senior researcher at the National Institute of Information and Communications Technology (NICT) in Koganei, Japan. Puttnam says the researchers built their communications hardware stack from optical amplifiers and other equipment developed, in part, by Nokia Bell Labs and the Hong Kong–based company Amonics. The assembled tech comprises six separate optical amplifiers that can squeeze optical signals through C-band wavelengths—the standard, workhorse communications band today—plus the less-popular O-, E-, S-, L-, and U-bands. (The O- and E-bands are in the near-infrared, while the S-, C-, L-, and U-bands are in what’s called the short-wavelength infrared.) All together, the combination of O, E, S, C, L, and U bands enables the new technology to push a staggering 402 terabits per second (Tbps) through the kinds of fiber optic cables that are already in the ground and underneath the oceans. That’s impressive compared with the competition. “The world’s best commercial systems are 100 terabits per second,” Puttnam says. “So we’re already doing about four times better.” Earlier this year, a team of researchers at Aston University in Birmingham, England, reported what at the time was a record-setting 301 Tbps using much the same tech; the two teams also share a number of researchers.
    Puttnam adds that if one wanted to push everything to its utmost limits, more bandwidth still could be squeezed out of existing cables. “If you really push everything, if you filled in all the gaps, and you had every channel the highest quality you can arrange, then probably 600 [Tbps] is the absolute limit,” Puttnam says.

    Getting to 402 Tbps—or 600

    The “C” in C-band stands for “conventional”—and C-band is the conventional communications band in fiber optics in part because signals in this region of spectrum experience low loss in the fiber. “Fiber loss is higher as you move away from C-band in both directions,” Puttnam says. For instance, in much of the E-band and O-band, the same phenomenon that causes the sky to be blue and sunsets to be pink and red—Rayleigh scattering—makes the fiber less transparent in these regions of the infrared spectrum. And just as a foggy night sometimes requires fog lights, strong amplification of signals matters all the more when the fiber is less transparent than it is in the comparatively high-transparency C-band. Previous efforts to increase fiber optic bandwidths have often relied on what are called doped-fiber amplifiers (DFAs), in which an optical signal enters a modified stretch of fiber that’s been doped with a rare-earth ion such as erbium. When a pump laser is shined into the fiber, the dopant elements are pushed into higher energy states. That allows photons from the optical signal passing through the fiber to trigger stimulated emission from the dopant elements. The result is a stronger (i.e., amplified) signal exiting the DFA fiber stretch than the one that entered it. Bismuth is the dopant of choice for the E-band. But even bismuth DFAs are still just the least-bad option for boosting E-band signals.
They can be inefficient, with higher noise and more limited bandwidths. So Puttnam says the team developed a DFA that is co-doped with both bismuth and germanium. Then they added to the mix a kind of filter developed by Nokia that optimizes the amplifier performance and improves the signal quality. “So you can control the spectrum to compensate for the variations of the amplifier,” Puttnam says. Ultimately, he says, the amplifier can still do its job without overwhelming the original signal. Chigo Okonkwo, associate professor of electrical engineering at the Eindhoven Hendrik Casimir Institute at TU Eindhoven in the Netherlands, adds that new optical amplifiers certainly need to be developed for bands beyond the standard C-band. But too much amplification, or amplification at the wrong place along a given cable line, can also be too much of a good thing. “If more photons... are injected into the fiber,” he says, “it changes the conditions in the fiber—a bit like the weather—affecting photons that come afterward, hence distorting the signals they carry.”

Pushing Data Rates Into the World

Puttnam stresses that the research team didn’t send one signal down a commercial-grade fiber optic line that in itself contained 402 trillion bits per second of data. Rather, the team separately tested each individual region of spectrum and all the various amplifiers and filters on the line that would need to be implemented as part of the overall optical hardware stack. But what matters most, he says, is the inherent utility of this tech for existing commercial-grade fiber. “Adding more wavelength bands is something that you can do without digging up fibers,” Puttnam says. “You might ideally just change the ends, the transceiver—the transmitter and the receiver. Or maybe halfway, you’d want to change the amplifiers. 
And that’s the most you would [need to] do.” According to Polina Bayvel, professor of optical communications and networks at University College London, those same transceivers that Puttnam referenced are a next-stage challenge for the field. “Transceivers need to be intelligent—akin to self-driving cars, able to sense and adapt to their environment, delivering capacity when and where it’s needed,” says Bayvel, who has collaborated with members of the team before but was unaffiliated with the present research. To that end, AI and machine learning (ML) techniques can help next-generation efforts to squeeze still more bits through fiber optic lines, she says. “AI/ML techniques may help detect and undo distortions and need to be developed in combination with high-capacity capabilities,” Bayvel adds. “We need to understand that optical fiber systems and networks are not just high-capacity plumbing. Optical fiber networks must be intelligent as well as secure and resilient.” The researchers detailed their findings earlier this year at the Optical Fiber Communication Conference 2024 in San Diego. UPDATE 8 July 2024: This story was updated to include the perspectives of Chigo Okonkwo at TU Eindhoven. UPDATE 9 July 2024: This story was updated to correct the international affiliation of the researchers (not just Japan and the U.K., as a previous draft stated) and the communications bands the experimental protocol used: portions of the O-, E-, S-, C-, L-, and U-bands were all leveraged to achieve the 402-terabits-per-second breakthrough. (A previous version of this story stated, incorrectly, that the technology used only the ESCL fiber optic communications bands.)
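The loss-and-amplification story above can be sketched as a simple link budget in decibels: loss rises away from C-band, band-specific amplifiers make up the difference, and a gain-flattening filter levels the result. All numbers below are illustrative assumptions, not figures from the NICT experiment:

```python
# Illustrative per-band fiber attenuation (dB/km). Real values depend on the
# fiber, but loss generally rises as you move away from C-band.
loss_db_per_km = {"O": 0.35, "E": 0.30, "S": 0.25, "C": 0.20, "L": 0.21, "U": 0.25}

def received_dbm(launch_dbm, band, span_km):
    """Launch power minus accumulated fiber loss over one span."""
    return launch_dbm - loss_db_per_km[band] * span_km

# Over an 80 km span, an O-band signal arrives well below a C-band one,
# so its amplifier must supply the difference in gain.
span_km = 80
extra_gain_db = received_dbm(0, "C", span_km) - received_dbm(0, "O", span_km)

# A gain-flattening filter then trims each band's amplified output down to a
# common level, trading peak gain for a flat spectrum (gains are made up).
amp_gain_db = {"O": 28.0, "E": 24.0, "S": 20.0, "C": 16.0, "L": 16.8, "U": 20.0}
target = min(amp_gain_db.values())
filter_atten_db = {band: gain - target for band, gain in amp_gain_db.items()}
net_gain_db = {band: gain - filter_atten_db[band] for band, gain in amp_gain_db.items()}

print(round(extra_gain_db, 1), net_gain_db)
```

The flattening step is the role Puttnam describes for the Nokia filter: controlling the spectrum to compensate for the amplifier's wavelength-dependent gain.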

  • The Energy Transition Requires a Holistic Approach
    by Robert N. Charette on 7. Jula 2024. at 13:00

    Unless U.S. energy policy and industry practice are systematically shaped to intercept and exploit the exponential improvements in clean-energy technology and cost reductions now occurring, the United States could end up with the worst of all situations by 2040: a dystopian grid where energy costs are high and reliability is poor, decarbonization progress is stalled, and the economic gains that have been made over the last century are at risk. That’s a central premise of Energy 2040: Aligning Innovation, Economics and Decarbonization by Deepak Divan, professor and founding director of the Center for Distributed Energy at the Georgia Institute of Technology and recipient of the 2024 IEEE Medal in Power Engineering, and his coauthor Suresh Sharma, a former General Electric executive and entrepreneur in residence at Georgia Tech. The book explores how new sources of energy are disrupting long-held beliefs and assumptions about how energy should be generated, transmitted, and distributed. In the following interview, IEEE Spectrum contributing editor Robert N. Charette talks with Divan about how to align economic imperatives and climate goals for sustainability and affordability. One of the fundamental themes of your book is that the technological learning curve that has resulted in the rapid reduction in the costs of renewable energy has been sustained for 50 to 70 years and shows no signs of slowing down. You also write that these declines were not predicted by experts in the field just two decades ago. What do you mean by the technological learning curve? What did you find in terms of cost reductions in different types of renewable energy as a result? And why were the experts so wrong in their predictions of renewable energy costs? Deepak Divan: The technological learning curve is at the heart of our book. 
We spent a lot of time in the beginning of the book going through the history of why we are where we are, because it is important to understand the process and nuances of how we got here. It is quite complicated, but I’ll try to simplify it. We start at a place where science lagged technology and the market by a significant amount in the early years of the power industry. In other words, the processes of taking technology to market through innovation, through tinkering, through entrepreneurs who were willing to invest, helped create the underlying structure of today’s utility industry. When the electricity grid was established, it was the Wild West, with every entrepreneur trying to get ahead of the others with their own proprietary solutions. However, it soon became clear that the grid, which was not just a single device but a physically coupled network of a large number of devices, needed to be coordinated and controlled as a whole—very different from most previous technological innovations. Everybody’s appliances needed to work with the same voltage and the same frequency, for instance. So, electricity providers were forced to make everything work seamlessly—challenging in a world before microprocessors and power electronics. Yet at the same time, the early electricity providers also focused on where the money was, so they ended up targeting those pieces of the market that had the best return on their investments. As a result, big, broad swaths of the country, typically rural, were left in the dark. This helped bring about the Public Utility Holding Company Act of 1935, which forced more regulation on the electricity industry. It also promised utilities better and more stable economic returns in exchange for providing universal access, and so we ended up with the grid that we have right now. 
However, industry regulations also strongly influenced the way electricity providers thought. With the utility industry now regulated, it was not possible to bring innovation to market very easily. Reliability was the most important objective, and any new technological innovations that might reduce reliability were frowned upon. As a result, it took 10 to 20 years to bring new technologies to market. So, the electricity industry went from a fast-moving, risk-taking one to an industry that was very, very slow moving, very risk averse. That was fine as long as technological innovation was also moving slowly. Over the past two decades, however, something radically changed. Traditional learning curves, where one gained experience over time and the product or service cost went down a modest amount until the next S-shaped learning curve began, started to disappear. Instead, across many energy-related technologies, learning curves and their resultant cost reductions continued over decades, seemingly without limit and with few indications of when they will ever saturate. We’ve seen this in microelectronics ad nauseam, for example. We have also seen it reflected in the photovoltaics space, where the learning curves began in the early 1970s. Since that time, there have been hundreds of technologies that have intersected and interconnected to create a 23 percent reduction in price for each doubling in sales volume, with no signs that it’s going to slow down. The same kind of curve is occurring in the battery space because, again, it is materials-based, with multiple new smart materials all coming together to give you both more kilowatts and more kilowatt-hours. The battery market is now tasting success; it is attracting huge investments, again, with no signs of slowing down. Why did no one in the energy industry see this coming? 
Divan: If we go back to around the year 2000, at that point, solar LCOE (levelized cost of energy) was $850 per megawatt-hour, and batteries were $1,200 per kilowatt-hour. There was nobody in their right mind who thought that that would ever become competitive with gas and coal sitting at around $35 and $50 per megawatt-hour. No one believed that the learning rates in solar or batteries, for example, could be sustained. Everyone in the industry thought that the technology was gimmicky and was not really going to be able to scale. After all, solar panels are small little things. How could you compete with a 500-megawatt gas plant? Additionally, the utilities all used similar 20-year integrated resource planning cycles. So, they were already making investments in terms of what needed to be done, and there was not a consultancy in the world willing to advise them to stop everything they were doing and start moving toward solar. There was no rational basis for that. The energy industry also believed it had so much economic and policy clout that it could hold off any threat from renewable energy forever. A former CEO of PJM, the biggest grid operator in the United States, told me that even in 2010 there was not a single CEO of a grid operator, electric utility, automotive or oil company who thought that electric vehicles, solar power or batteries were going to be cost competitive any time soon. But by 2015, new energy companies were disrupting energy incumbents’ long-held assumptions. This was reflected by an astounding 97.5 percent reduction in the cost of solar from 2000 to 2022, and this is installed cost! Similarly with batteries, there has been a 92 percent cost reduction over the same period that is just continuing, because there are so many new technologies being brought into play. 
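The learning-curve arithmetic Divan cites hangs together under Wright's law: a 23 percent price drop per doubling of cumulative volume compounds quickly. The sketch below infers how many doublings a 97.5 percent total cost reduction implies; the two input figures come from the interview, and the calculation is an editorial illustration:

```python
import math

learning_rate = 0.23     # price falls 23% per doubling of cumulative volume
price_ratio = 1 - 0.975  # 97.5% reduction leaves 2.5% of the 2000 price

# Wright's law: price(n doublings) = price0 * (1 - learning_rate) ** n,
# so n = log(price_ratio) / log(1 - learning_rate).
doublings = math.log(price_ratio) / math.log(1 - learning_rate)
cumulative_growth = 2 ** doublings  # implied growth in cumulative volume

print(round(doublings, 1))  # roughly 14 doublings of volume since 2000
```

Fourteen-odd doublings of cumulative deployment over two decades is why no slow, linear forecast anticipated the price collapse.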
Why the biggest companies in the world, which are responsible for a huge part of global GDP, have the smartest people in the world, and are advised by the smartest consultants in the world, could not see this coming is a fundamental question that we ask in the book. One of the implications you discuss is that distributed energy resources, or DERs, like solar power, wind turbines and large-scale energy-storage systems are going to change the electric grid from a synchronous-generator, inertia-driven system to an inverter-based-resource (IBR) rich grid where grid voltage and frequency are not regulated by inertial sources. Can you explain the difference, and what needs to happen from a technology perspective to move to a decarbonized IBR grid? Divan: Getting to an inverter-based grid is one of the things that the industry is struggling with on the technology side. Fundamentally, the existing grid is electromechanical in nature. There are these big, rotating, energy-generating turbine-driven synchronous machines, and over 100 years we have figured out how to make them work to make the grid reliable. All the simplifications and efficiencies, all the standardizations and designs and synchronous generators that were needed have been figured out, and now there is a system that works reasonably well. The grid that has been built in the United States has been called the largest machine ever built, with all these rotating machines possessing huge amounts of rotational inertia, all rotating together in lockstep because of the way synchronous machines operate. When even a small disturbance occurs anywhere on the grid, all of them continue to operate locked together and to share the power delivered, with the ability to clear any faults as they occur on the system. The entire system is structured around this model. While it is often called a smart grid, there’s nothing smart about it. 
It’s an extremely good grid, but it’s really a passive grid. All the smarts are sitting 15 minutes away at the operator level. So, for 15 minutes, the system has to keep operating until the next command is received. This enormous machine has several interesting characteristics that make it work well. One is that the grid has a lot of damping built into it. Anytime there is a deviation because of a disturbance on the system, there’s a restorative torque that automatically occurs. Another characteristic is that frequency is usually thought of as the universal parameter on the system, since all the generators essentially use a power-frequency droop principle to share power equally. However, the problem is that in the synchronous generator world, the frequency command is a DC quantity, while the three-phase AC voltages are generated and locked in by the machines’ action itself, not by control action. Now, as synchronous generators are replaced with inverters, you don’t have any intrinsic rotation or inertia in the system. We don’t have any of the attributes of damping that are automatically built in. Further, there are now inverters with DSPs [digital signal processors] and FPGAs [field-programmable gate arrays] that can measure the grid voltage and act very, very quickly. In the early years, and all the way until very recently, we only built what we call grid-following inverters. Essentially, the voltage of the grid was taken as given and power was pushed against it. The inverter followed the grid, and power could be dispatched per utility command, which worked fine. This has allowed us to scale IBRs in many locations around the world. 
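The power-frequency droop principle Divan mentions is easy to sketch: each synchronous unit reduces its output frequency setpoint slightly as it is loaded, so the common grid frequency settles where the units' combined output meets the load, and units share that load in proportion to their ratings. A minimal steady-state sketch (the ratings and droop values are illustrative, not from the book):

```python
# Each generator follows f = f0 - droop * (P / P_rated). At one common
# system frequency, units then share load in proportion to rating.
f0 = 60.0  # no-load frequency, Hz

# (rated MW, droop in Hz at full output); 5% droop on a 60 Hz base = 3 Hz.
units = [(500.0, 3.0), (300.0, 3.0), (200.0, 3.0)]

def share_load(load_mw):
    """Solve for the common steady-state frequency, then each unit's output."""
    # P_i = rated_i * (f0 - f) / droop_i, and the P_i must sum to the load.
    mw_per_hz = sum(rated / droop for rated, droop in units)
    f = f0 - load_mw / mw_per_hz
    return f, [rated * (f0 - f) / droop for rated, droop in units]

f, outputs = share_load(800.0)
print(round(f, 2), [round(p, 1) for p in outputs])
```

Grid-forming inverter controls often emulate this same droop relationship digitally, which is one reason the behavior still matters in an IBR-rich grid even after the rotating machines are gone.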
The difficulty is that as one gets to high penetration of inverter-based resources, the grid is no longer being formed nicely, and so the system can become unstable. Now there is a need to start thinking about how the grid is going to be formed when we have an inverter-dominant grid. The issue is that one does not have that rotating machine, one doesn’t have that restorative torque, and one doesn’t have the system damping. None of those things are there. Each inverter thinks it is very smart, and it’s going to try to form the voltage based on local information. However, it is also going to have to interact with what another inverter is trying to do to form voltage, and what another inverter is doing, and so on. This becomes a problem. So as these inverters interact with each other, it’s often hard to keep them stable. While we have been able to demonstrate grid-forming inverters, and every manufacturer now claims to have one, we do not exactly know what a grid-forming inverter should do, especially at scale, to ensure that they do not interact with each other, particularly when millions of inverters are deployed. This creates a challenge. There is also the concern that each of these inverters is made by a different manufacturer. Some of them were made 20 years back, some were made 10 years back, and these now need to be compatible with what will be made in the next 10 years. There are no agreed standards. Standards are lagging by 10 years or more. The question is, what does one do if it takes you 10 years to get a new standard out, given that the rate of solar deployment is so high that in that time some 1,000 gigawatts of PV solar will be deployed, none of it compliant with the future, as yet unknown, standard? How do you also stabilize the grid in this environment? Divan: The utilities today have grown up without having to worry about any of these issues. 
They just focused on how to restore power, how to connect this to that, how to manage the workforce, and so on. Not this dynamic beast, which they have few skills in dealing with. In fact, most big electric utilities have few people in their workforce who are skilled in power electronics, because the old system did not need it. These are very complex issues, and part of the challenge is that it is a different operational paradigm than today’s. We do not have these fundamental issues resolved. The important question, I think, and part of the problem, is that nobody can stand in public and say, “Hey, there’s a problem here!” I keep thinking that Elon Musk should not be worrying so much about autonomous cars today. Give me an autonomous inverter first. That is a much, much more important priority in the near term. In the book, you were careful to also lay out factors that could derail your energy vision for 2040. Could you discuss a few of them, and what might be done to avoid or minimize them? Divan: I do not think the transition to renewables and EVs can be stopped, but I think it can be made extremely messy. Major energy transformations have taken 50 to 70 years, and they have been very messy from a regulatory standpoint. People are pushing back against going to renewables, but I do not think they can win, because at the end of the day, everybody is going to respond to the economics and functionality of inexpensive renewables and new holistic solutions. Even if we in the United States do not do it because of the politics and incumbent resistance, the Chinese and others are going to continue to move the technology along and to drive the prices down. And so, you know, you’re going to at some point say, oh, ****, I think we have to adopt this new stuff, because it’s going to seep into widespread use. By then, I am concerned that we will have been left behind. 
Another issue that could make things messy is that the utilities do not have the ability to change easily. They must meet their reliability requirements in the near term, which becomes problematic when all these new technologies are coming in. They are not going to absorb these technologies easily. In addition, new energy loads are moving in. Data centers, especially those for AI, are coming online, as well as electric-vehicle charging, heat pumps and green hydrogen. How do you meet those requirements? It is tempting to say, “Let’s go back to the old days and fire up the gas and coal plants.” While that is not the answer, it is something that easily could happen. The point I am trying to make is that I do not believe this energy transition can be stopped, but it can be made extremely expensive. It can be made extremely messy, and then we will have lost the climate battle at the same time. But it does not have to be so! For the first time in our history, decarbonized climate-friendly solutions are also lower cost than traditional fossil-energy-based solutions. For the first time ever, what is good for our wallet is also good for the planet! Nobody is laying the difficulties out. Nobody. The hope with writing this book was to start this conversation, because we are not seeing anybody addressing these issues holistically. Unfortunately, most people are unable to act on something that has a long-term benefit but is more expensive in the near to midterm. They will only act in the short term. So, you have to give them a short-term reason for doing something by making it the attractive thing to do financially. This is very important in my mind. Nobody can argue with the economics of renewables in the future; it is going to drive everything. 
However, if you do not think about the economics and government policies properly together, they will drive bad outcomes. Who do you hope will read your book, and what are the two or three fundamental messages they should take away and, more importantly, act on, and when? Divan: I think the audience is everybody who is interested in energy in general, including researchers, engineers, policymakers, investors, entrepreneurs and students. People are interested in the topics we raise. Every time I go into a room, I have six people approach me wanting to talk about it. They are reading something in the news, and they have only a narrow sliver of information. They are not able to connect all the dots together. I think part of the problem is that this field is very complex and very nuanced, and when you try to simplify it, you can get to the wrong conclusions. My objective for writing the book was that we really do not hear this line of conversation in the industry. In other words, a holistic view of the problems confronting the industry is required, because everything you do intersects with something else. The utility industry does not fully understand this. When I go to the IEEE Power and Energy Society general meeting, I go to every conference room and I ask a question about the dynamics and scaling of IBRs and distributed systems. Nobody has an answer. This is scary. I mean, this whole industry is there, and they’re absorbing gigawatts after gigawatts of renewable energy and don’t have any idea what the hell is going to happen as we move to a distributed-energy-resources-dominant zero-carbon grid (which EPRI has also set as the target for 2050). Again, oversimplifying is going to lead us to the wrong place; not looking holistically is going to lead us to the wrong place. We have an opportunity where we have alignment between economics and decarbonization for the first time. Let’s not blow it. 
This article was updated on 10 July 2024 to correct the units in solar LCOE in 2000 to US $850 per megawatt-hour instead of per kilowatt-hour.

  • How Good Is ChatGPT at Coding, Really?
    by Michelle Hampson on 6. Jula 2024. at 12:00

    This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore. Programmers have spent decades writing code for AI models, and now, in a full-circle moment, AI is being used to write code. But how does an AI code generator compare to a human programmer? A study published in the June issue of IEEE Transactions on Software Engineering evaluated the code produced by OpenAI’s ChatGPT in terms of functionality, complexity and security. The results show that ChatGPT has an extremely broad range of success when it comes to producing functional code—with a success rate ranging anywhere from as poor as 0.66 percent to as good as 89 percent—depending on the difficulty of the task, the programming language, and a number of other factors. While in some cases the AI generator could produce better code than humans, the analysis also reveals some security concerns with AI-generated code. Yutian Tang is a lecturer at the University of Glasgow who was involved in the study. He notes that AI-based code generation could provide some advantages in terms of enhancing productivity and automating software development tasks—but it’s important to understand the strengths and limitations of these models. “By conducting a comprehensive analysis, we can uncover potential issues and limitations that arise in the ChatGPT-based code generation... [and] improve generation techniques,” Tang explains. To explore these limitations in more detail, his team sought to test GPT-3.5’s ability to address 728 coding problems from the LeetCode testing platform in five programming languages: C, C++, Java, JavaScript, and Python. 
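Functional success in a study like this is binary per problem: the generated code either passes the platform's test suite or it doesn't, and rates are then grouped by difficulty and era. A sketch of how such rates are tallied, using invented placeholder records rather than the paper's data:

```python
from collections import defaultdict

# Each record: (difficulty, problem predates 2021?, passed all tests?).
# Invented placeholder results, not the study's dataset.
results = [
    ("easy", True, True), ("easy", True, True), ("easy", False, True),
    ("easy", False, False), ("hard", True, False), ("hard", False, False),
]

def success_rates(records):
    """Fraction of problems solved, grouped by (difficulty, era)."""
    tally = defaultdict(lambda: [0, 0])  # group -> [passed, total]
    for difficulty, pre2021, passed in records:
        group = (difficulty, "pre-2021" if pre2021 else "post-2021")
        tally[group][0] += passed
        tally[group][1] += 1
    return {group: passed / total for group, (passed, total) in tally.items()}

print(success_rates(results))
```

Grouping the tallies this way is what lets the study report, for example, that "easy" success fell from 89 percent before 2021 to 52 percent after.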
Overall, ChatGPT was fairly good at solving problems in the different coding languages—especially when attempting to solve coding problems that existed on LeetCode before 2021. For instance, it was able to produce functional code for easy, medium, and hard problems with success rates of about 89, 71, and 40 percent, respectively. “However, when it comes to the algorithm problems after 2021, ChatGPT’s ability to generate functionally correct code is affected. It sometimes fails to understand the meaning of questions, even for easy level problems,” Tang notes. For example, ChatGPT’s ability to produce functional code for “easy” coding problems dropped from 89 percent to 52 percent after 2021. And its ability to generate functional code for “hard” problems dropped from 40 percent to 0.66 percent after this time as well. “A reasonable hypothesis for why ChatGPT can do better with algorithm problems before 2021 is that these problems are frequently seen in the training dataset,” Tang says. Essentially, as coding evolves, ChatGPT has not yet been exposed to new problems and solutions. It lacks the critical thinking skills of a human and can only address problems it has previously encountered. This could explain why it is so much better at addressing older coding problems than newer ones. Interestingly, ChatGPT is able to generate code with smaller runtime and memory overheads than at least 50 percent of human solutions to the same LeetCode problems. The researchers also explored the ability of ChatGPT to fix its own coding errors after receiving feedback from LeetCode. 
They randomly selected 50 coding scenarios where ChatGPT initially generated incorrect code, either because it didn’t understand the content or the problem at hand. While ChatGPT was good at fixing compiling errors, it generally was not good at correcting its own mistakes. “ChatGPT may generate incorrect code because it does not understand the meaning of algorithm problems; thus, this simple error feedback information is not enough,” Tang explains. The researchers also found that ChatGPT-generated code did have a fair amount of vulnerabilities, such as a missing null test, but many of these were easily fixable. Their results also show that generated code in C was the most complex, followed by C++ and Python, which had a complexity similar to that of the human-written code. Tang says that, based on these results, it’s important that developers using ChatGPT provide additional information to help it better understand problems or avoid vulnerabilities. “For example, when encountering more complex programming problems, developers can provide relevant knowledge as much as possible, and tell ChatGPT in the prompt which potential vulnerabilities to be aware of,” Tang says.
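The self-repair experiment boils down to a feedback loop: run the generated code, and if it fails, pass the error message back and ask again. A schematic with the model and the judge stubbed out (all names here are mine, not the paper's):

```python
def repair_loop(generate, run_tests, problem, max_rounds=3):
    """Ask a code generator for a solution, feeding failures back until tests pass.

    `generate(problem, feedback)` and `run_tests(code)` stand in for a
    ChatGPT call and a LeetCode-style judge; neither is a real API.
    """
    feedback = None
    for _ in range(max_rounds):
        code = generate(problem, feedback)
        passed, feedback = run_tests(code)
        if passed:
            return code
    return None  # the study found simple error feedback often is not enough

# Toy stand-ins: this "model" fixes an off-by-one only after seeing feedback.
def fake_generate(problem, feedback):
    if feedback is None:
        return "def add(a, b): return a + b + 1"  # first attempt is buggy
    return "def add(a, b): return a + b"          # repaired attempt

def fake_judge(code):
    namespace = {}
    exec(code, namespace)
    passed = namespace["add"](2, 2) == 4
    return passed, None if passed else "wrong answer on add(2, 2)"

print(repair_loop(fake_generate, fake_judge, "add two numbers"))
```

Tang's point is that for semantic misunderstandings, unlike compile errors, the feedback string alone rarely carries enough information for the loop to converge.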

  • Video Friday: Humanoids Building BMWs
    by Evan Ackerman on 5. Jula 2024. at 19:51

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos! Figure is making progress toward a humanoid robot that can do something useful, but keep in mind that the “full use case” here is not one continuous shot. [ Figure ] Can this robot survive a 1-meter drop? Spoiler alert: it cannot. [ WVUIRL ] One of those things that’s a lot harder for robots than it probably looks. This is a demo of hammering a nail. The instantaneous rebound force from the hammer is absorbed through a combination of the elasticity of the rubber material securing the hammer, the deflection in torque sensors and harmonic gears, back-drivability, and impedance control. This allows the nail to be driven with a certain amount of force. [ Tokyo Robotics ] Although bin packing has been a key benchmark task for robotic manipulation, the community has mainly focused on the placement of rigid rectilinear objects within the container. We address this by presenting a soft robotic hand that combines vision, motor-based proprioception, and soft tactile sensors to identify, sort, and pack a stream of unknown objects. [ MIT CSAIL ] Status Update: Extending traditional visual servo and compliant control by integrating the latest reinforcement and imitation learning control methodologies, UBTECH gradually trains the embodied intelligence-based “cerebellum” of its humanoid robot Walker S for diverse industrial manipulation tasks. [ UBTECH ] If you’re gonna ask a robot to stack bread, better make it flat. 
[ FANUC ] Cassie has to be one of the most distinctive-sounding legged robots there is. [ Paper ] Twice the robots are by definition twice as capable, right...? [ Pollen Robotics ] The Robotic Systems Lab participated in the Advanced Industrial Robotic Applications (AIRA) Challenge at the ACHEMA 2024 process industry trade show, where teams demonstrated their teleoperated robotic solutions for industrial inspection tasks. We competed with the ALMA legged manipulator robot, teleoperated using a second robot arm in a leader-follower configuration, placing us in third place for the competition. [ ETHZ RSL ] This is apparently “peak demand” in a single market for Wing delivery drones. [ Wing ] Using a new type of surgical intervention and neuroprosthetic interface, MIT researchers, in collaboration with colleagues from Brigham and Women’s Hospital, have shown that a natural walking gait is achievable using a prosthetic leg fully driven by the body’s own nervous system. The surgical amputation procedure reconnects muscles in the residual limb, which allows patients to receive “proprioceptive” feedback about where their prosthetic limb is in space. [ MIT ] Coal mining in the Forest of Dean (UK) is a difficult and challenging job. Going into the mine as a human is sometimes almost impossible. We did it with our robot while inspecting the mine with our partners (Forestry England) and the local miners! [ UCL RPL ] Chill. [ ABB ] Would you tango with a robot? Inviting us into the fascinating world of dancing machines, robot choreographer Catie Cuan highlights why teaching robots to move with grace, intention and emotion is essential to creating AI-powered machines we will want to welcome into our daily lives. [ TED ]

  • IEEE Team Training Programs Elevate Wireless Communication Skills
    by Natalie Apadula on 5. Jula 2024. at 13:00

    The field of wireless communication is constantly evolving, and engineers need to be aware of the latest improvements and requirements. To address their needs, the IEEE Communications Society is offering two exclusive training programs for individuals and technical teams. The online Intensive Wireless Communications and Advanced Topics in Wireless course series are taught by experts in real time. Through lectures that include practical use cases and case studies, participants acquire knowledge that can be applied in the workplace. During the interactive, live courses, learners have the opportunity to engage directly with industry-expert instructors and get answers to their questions in real time. Recordings of the courses are available to facilitate group discussions of the materials and deepen understanding of concepts. Copies of the instructors’ slides are shared with participants, providing an ongoing resource for future reference.

The benefits of training as a team

“A team taking the courses together can benefit from discussing examples from the lectures and the practice questions,” instructor Alan Bensky says. “Attendees can also help each other better understand more difficult topics.” Bensky, who has more than 30 years of industry experience, teaches the Intensive Wireless Communications series. Panteleimon Balis, an Advanced Topics in Wireless instructor, says taking the courses together as a team “fosters an aligned development of knowledge that enhances communication and collaboration within the team, leading to more effective problem-solving and decision-making.” Balis is a radio access network specialist who provides training on mobile and wireless communications technologies. “The collective development of skill sets enables the team to apply the assimilated knowledge to real-world projects, driving innovation and efficiency within the organization,” he says. 
    “Ultimately, attending these courses as a team not only strengthens individual competencies but also reinforces team cohesion and performance, benefiting the organization as a whole.”

    Practical use cases to apply on the job

    The following topics are covered in the Intensive Wireless Communications course, which is scheduled to be held in September and October:

      • Fundamentals of wireless communication
      • Network and service architecture
      • Cellular networks
      • Noncellular wireless systems

    Several practical use cases are shared in the courses. Bensky notes, for example, that those working on Wi-Fi devices or network deployment likely will find the section on IEEE 802.11 especially useful because it covers the capabilities of the different amendments, particularly regarding data-rate calculation and presentation of achievable rates.

    “Attending these courses as a team not only strengthens individual competencies but also reinforces team cohesion and performance, benefiting the organization as a whole.” —Panteleimon Balis

    The Advanced Topics in Wireless series, taught in October and November, includes these classes:

      • 5G RAN and Core Network: Architecture, Technology Enablers, and Implementation Aspects
      • O-RAN: Disrupting the Radio Access Network through Openness and Innovation
      • Machine Type Communications in 5G and Beyond

    The inclusion of use cases, Balis says, brings significant value to the learning experience and helps with bridging the gap between theory and practice. In the O-RAN (open radio access network) module, for example, case studies analyze the pros and cons of early deployments in Japan and the United States. As noted by the IEEE Standards Association, the key concept of O-RAN is opening the protocols and interfaces among the various building blocks—radios, hardware, and software—in the RAN. The Advanced Topics in Wireless courses are scheduled to begin after the Intensive Wireless Communications series concludes.
More details about courses are available online, where you can learn how to offer the series to your team.

  • Detect Migrating Birds With a Plastic Dish and a Cheap Microphone
    by David Schneider on 4 July 2024 at 15:00

    Birding is booming. You may realize your local nature spots are especially busy during seasonal migrations, when birds move between their summer and winter grounds. Species that you had been noticing disappear may have been replaced by ones that hadn’t been there before. Or you may have seen migrating birds on the wing—say, a flock of geese flying in their famous V-formation. Even if you’re not a dedicated birder, you’ve probably made such observations throughout your life. So it might come as a surprise to learn that you’ve been missing out on most of this action, which takes place at night. But, as I discovered, with some simple electronics and the right software, you can identify nocturnal migrators with ease!

    Birds migrate at night for a few reasons. One is that it helps them to avoid predators. Also, it allows them to use the stars for navigation. A less obvious reason is that traveling at night helps these birds avoid heat stress. And the night air tends to be less turbulent, making flying easier. These nighttime flights are largely invisible. If you’re lucky, you might view telltale silhouettes by training a telescope on the moon. But during the Second World War, scientists realized that they could readily detect migrating birds using radar. Since then, ornithologists’ radar studies, particularly those that use modern weather radar, have proved immensely successful in showing where and when birds migrate at night. Radar echoes cannot, however, identify species. But there is another technique that can: recording the calls that birds make during their nocturnal travels.

    Incoming sounds are amplified using a parabolic dish made from a plastic bird-feeder cover [top]. A microphone attached at the focal point of the dish is connected to a preamplifier [middle left], which in turn feeds an external sound card [middle right], which connects to a host computer via USB. A large gel-acid battery [bottom] provides plenty of power for long-term monitoring. James Provost

    When ornithologist Richard Graber and electrical engineer William Cochrane made the first systematic recordings of nocturnally migrating birds in 1957, they used a microphone attached to a 2-meter-wide upward-facing parabolic dish. But you can get by today with a far more modest setup. You could, for example, reproduce the gear designed by Bill Evans. On his website he sells a microphone and preamp for this purpose along with guidance on how to package the equipment so that it will hold up to the elements.

    I explored a different approach, though, one that seemed easier and cheaper. Evans’s preamp is designed to be insensitive to low frequencies, as these aren’t of interest when you’re recording bird calls. I figured that this feature wasn’t that important, so after testing a few inexpensive options for the microphone and preamplifier, I chose one on Amazon for just US $9. This circuit uses the venerable NE5532, a low-noise, low-distortion dual op-amp design that’s been used in professional recording equipment since 1979. To make it directional, I unsoldered the condenser microphone from the board, attached a short length of audio cable to it, and mounted it at the focal point of an 8-inch-diameter parabolic dish—or, well, a reasonable approximation of a parabolic dish, as it’s actually a rain guard for bird feeders. You could also purchase a 16-inch-diameter one, but the 8-inch dish served me admirably. I found the focal point of this dish through trial and error and ran the output of the preamp into an old Creative Labs Sound Blaster external sound card, which had been collecting dust on my shelf. I suspect that just about any external sound card would work fine for this application, including the $34 StarTac model that I use to good effect to monitor solar flares. To power the preamp, I used a 7-ampere-hour, 12-volt gel-cell battery, which is overkill. But the big battery would allow me to leave the thing running for weeks at a time.
    Following Evans’s advice, I housed everything in a 2-gallon paint bucket, stretching some plastic wrap over the top to keep rain out. I placed my bucket o’ electronics on the roof of my porch, running a USB cable from the sound card, out the side of the bucket, and into my office through a window. Then I plugged it into a Windows laptop onto which I had installed Raven Lite, acoustic-spectrogram software made available for free by the Cornell Lab of Ornithology. Using Raven Lite to compute spectrograms showed just how sensitive this arrangement is. I could easily view, for example, the effect of completely inaudible sounds created by rubbing my thumb and forefinger together a couple of meters away from the microphone.

    With the gear in place outside, I started recording at night, beginning in early March, configuring the Raven Lite software to record a series of 1-hour sound files. The great thing about Raven Lite is that you can review hours of recordings just by scanning through spectrograms visually. Checking out a 1-hour-long sound file takes just a few minutes.

    This audiogram reveals the presence of bird calls. I uploaded the data to a server maintained by Cornell University that then uses AI to quickly identify the species. James Provost

    These files, of course, picked up a lot of sounds: rumbling traffic, screeching cats, wailing sirens, and who knows what else. But once you’ve looked at spectrograms for a while, it becomes easy to pick out bird chirps. There is no shortage of local birds chirping during the day, but after sunset their ornithological cacophony abates, returning again some time before dawn. The interval in between is where I went hunting for the sound of migrating birds. And after 10 days or so, I found my quarry: chirping that started shortly after midnight, rising in volume for a few minutes before fading away.
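    If you would rather script this scan than eyeball it in Raven Lite, the same idea (flag spectrogram frames whose energy in the bird-call band rises well above the background) takes only a few lines of Python with NumPy and SciPy. The sketch below is not part of the author's setup; the band limits, FFT size, and threshold are assumptions you would tune to your own recordings.

```python
import numpy as np
from scipy import signal

def find_call_frames(samples, rate, band=(2000.0, 10000.0), thresh_db=20.0):
    """Return spectrogram frame times whose energy in `band` rises
    `thresh_db` above the median level, a crude stand-in for visually
    scanning spectrograms for chirps."""
    freqs, times, sxx = signal.spectrogram(samples, fs=rate, nperseg=1024)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    band_power_db = 10 * np.log10(sxx[in_band].sum(axis=0) + 1e-12)
    return times[band_power_db > np.median(band_power_db) + thresh_db]

# Synthetic test clip: 5 s of faint noise with a 0.2 s "chirp" at t = 2 s.
rate = 22050
t = np.arange(5 * rate) / rate
clip = 0.01 * np.random.default_rng(0).standard_normal(t.size)
chirp_zone = (t >= 2.0) & (t < 2.2)
clip[chirp_zone] += np.sin(2 * np.pi * 5000 * t[chirp_zone])

hits = find_call_frames(clip, rate)
print(hits)  # frame times clustered around the 2-second chirp
```

    On a real overnight WAV file you would load the samples with `scipy.io.wavfile.read` and then inspect only the flagged times, much as the author skims a one-hour spectrogram in a few minutes.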
    Using Audacity, a free audio editor, I extracted a few seconds of the loudest chirping and uploaded the file to BirdNET, where the good folks at the Cornell Lab of Ornithology provide a tool for identifying bird calls. It indicated that the species I had recorded was the killdeer, a type of bird found throughout the continental United States, some populations of which are migratory. Additional nights of recording and scanning spectrograms turned up other sounds that appeared to be from other kinds of birds on the move, including such migratory species as the dark-eyed junco and Kentucky warbler. I’ve never been an accomplished bird watcher: I’d be hard-pressed to distinguish a sparrow from a wren. So it’s rather satisfying to discover that, with some simple electronics and the right software, I am able to pick out different species of migratory birds flying high overhead through the inky darkness of the night.

  • Autonomous Vehicles Can Make All Cars More Efficient
    by Willie D. Jones on 3 July 2024 at 11:00

    Autonomous vehicles have been highly anticipated because of the possibility that they will greatly reduce or perhaps eliminate the collisions that cause more than a million deaths each year. But safety isn’t the only potential benefit self-driving cars can offer: Teams of researchers around the world are showing that autonomous vehicles can also drive more efficiently than humans can. A U.S. Department of Energy program called NEXTCAR (Next-Generation Energy Technologies for Connected and Automated On-Road Vehicles), for example, is betting that a mix of new smart-vehicle technologies can boost fuel efficiency by as much as 30 percent. As part of the NEXTCAR program, San Antonio, Texas–based Southwest Research Institute (SwRI) showcased advances in autonomous vehicle technology that will improve vehicles’ fuel economy—including the fuel efficiency of nonautonomous automobiles that just so happen to be in traffic with autonomous ones. The demonstration was held at the ARPA-E Energy Innovation Summit in Dallas in late May.

    Making an Efficient Autonomous Vehicle

    The SwRI team retrofitted a 2021 Honda Clarity hybrid with basic autonomous features such as perception and localization. On the day of the summit, they drove the vehicle along a route encircling the parking lot of the convention center where the summit was held. SwRI’s Ranger localization system, which the researchers installed on the Honda, has a downward-facing camera that captures images of the ground. By initially mapping the driving surface, Ranger can later localize the vehicle with centimeter-level accuracy, using the ground’s unique “fingerprint” combined with GPS data. This precision ensures the vehicle drives with exceptional control. “It’s almost like riding on rails,” says Stas Gankov, a researcher in SwRI’s power-train engineering group.
    For this project, his group collaborated with other divisions at the institute, such as the intelligence-systems division, which developed the autonomy software stack added to the Honda Clarity. Just as important, however, was the addition of an ecodriving module, a key innovation by SwRI. The ecomode determines the most economical driving speed by considering various factors such as traffic lights and surrounding vehicles. This system employs predictive control algorithms to help solve a tricky optimization problem: How can cars minimize energy consumption while maintaining efficient traffic flow? SwRI’s ecomode aims to reduce unnecessary acceleration and deceleration in order to optimize energy usage without impeding other vehicles.

    “Autonomous vehicles operating in ecomode influence the driving behavior of all the cars behind them.” —Stas Gankov, Southwest Research Institute

    To illustrate how the technology works, the team installed a traffic signal along the demonstration pathway. Gankov says an actual traffic-light timer from a traffic-signal cabinet was connected to a TV screen, providing a visual for attendees. A dedicated short-range communications (DSRC) radio was also attached, broadcasting the signal’s phase and timing information to the vehicle. This setup enabled the vehicle to anticipate the traffic light’s actions far more accurately than a human driver could. For instance, Gankov says, if the Honda Clarity was approaching a red light that was about to turn green, it would know the light was due to change and so avoid wasting energy by braking and then accelerating again. Conversely, if the car was approaching the signal as it was about to turn from green to yellow to red, the vehicle would release the accelerator and let friction slow it to a crawl, avoiding unnecessary acceleration in an attempt to beat the light.
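    As a thought experiment, the light-anticipation behavior just described can be reduced to a toy decision rule. Everything below is an illustration of the idea rather than SwRI's actual control law: a real ecodriving module solves a predictive-control optimization, while this sketch simply compares the car's arrival time against the broadcast signal phase and timing.

```python
def eco_advice(dist_m, speed_mps, light, time_to_change_s):
    """Toy ecodriving rule using SPaT (signal phase and timing) data:
    decide whether to hold speed or coast, judged purely on kinematics.
    An illustration only, not SwRI's control algorithm."""
    eta = dist_m / speed_mps  # seconds until we reach the signal at current speed
    if light == "red":
        # Light turns green before we arrive: keep rolling and skip
        # the wasteful brake-then-accelerate cycle.
        return "hold" if time_to_change_s < eta else "coast"
    if light == "green":
        # Light goes stale before we arrive: lift off early rather
        # than accelerating to beat it.
        return "hold" if eta < time_to_change_s else "coast"
    return "coast"  # yellow: bleed off speed

print(eco_advice(200, 15, "red", 5))    # arrives in ~13 s, green in 5 s
print(eco_advice(300, 20, "green", 4))  # arrives in 15 s, green ends in 4 s
```

    The first case mirrors the Clarity approaching a red light that is about to change (keep rolling), and the second mirrors the stale-green case (release the accelerator and coast).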
    These autonomous driving strategies can lead to significant energy savings, benefiting not just the autonomous vehicles themselves but also the entire traffic ecosystem. “In a regular traffic situation, autonomous vehicles operating in ecomode influence the driving behavior of all the cars behind them,” says Gankov. “The result is that even vehicles with Level 0 autonomy use fuel more sparingly.”

    The Grand Vehicle Energy Plan

    SwRI has been a participant in the NEXTCAR initiative since 2017. The program’s initial phase involved 11 teams, including SwRI, Michigan Technological University, Ohio State University, and the University of California, Berkeley. SwRI, in collaboration with the University of Michigan, focused on optimizing a Toyota Prius Prime, already known for its fuel efficiency, to achieve a 20 percent improvement in energy usage through optimization algorithms and wireless communication with its surroundings. This was accomplished without modifying the Toyota’s power train or compromising its emissions. The team used power-split optimization, balancing the gas engine and battery-propulsion system for maximum efficiency.

    Building on the success of NEXTCAR’s first phase, the program entered its second phase in 2021, with just SwRI, Michigan Tech, Ohio State, and UC Berkeley remaining. The focus of NEXTCAR 2 has been determining how much automation could further enhance energy efficiency. Gankov explains that while the first phase demonstrated a 20 percent energy-efficiency improvement over a baseline 2016 or 2017 model-year vehicle with no autonomous driving capabilities, achieved through the addition of vehicle-to-everything connectivity alone, the second phase is exploring the potential for an additional 10 percent improvement by incorporating autonomous features. Gankov says SwRI initially intended to partner with Honda for NEXTCAR’s second phase, but when contracting issues arose, the nonprofit proceeded independently.
    Utilizing an autonomy platform developed by SwRI’s intelligence-systems division, the NEXTCAR team equipped the Honda Clarity with what amounted to Level 4 autonomy in a box. This autonomy system features a drive-by-wire system, allowing the vehicle to automatically adjust its speed and steering based on inputs from the autonomy software stack and the ecodriving module. This ensures the vehicle prioritizes safety while optimizing for energy efficiency. Techniques like efficient highway merging were key strategies in the team’s approach to making the most of each tank of fuel or battery charge. “For example, in heavy traffic on the highway, calculating the most optimal way to merge onto the highway without negatively affecting the energy efficiency of the vehicles already on the highway is crucial,” Gankov noted. As NEXTCAR 2 enters its final year, the demonstration at the ARPA-E Summit served as a testament to the progress made in autonomous-vehicle technology and its potential to dramatically improve energy efficiency in transportation.

  • High Schooler Brings IEEE Mobile Disaster-Relief Tech to Campus
    by Kathy Pretz on 2 July 2024 at 18:00

    Unlike most people who encounter the IEEE-USA MOVE (Mobile Outreach VEhicle) emergency relief truck, Ananya Yanduru wasn’t a survivor of a natural disaster who needed to charge her cellphone or access the Internet. Instead, the 16-year-old got a guided tour of the truck on the grounds of her high school. She had requested MOVE visit Canyon Crest Academy, in San Diego, so she and her classmates could learn about the technology it houses. The vehicle is equipped with satellite Internet access and IP phone service. MOVE can charge up to 100 cellphones simultaneously. It also has a mobile television for tracking storms, as well as radios for communications. A generator and three solar panels on the roof power the technology. When it’s not deployed to help in disaster recovery, the vehicle stops at venues so its team can provide guided tours, educating people about ways technology helps during disasters. Yanduru spotted the truck in June 2023 when it was parked at the San Diego Convention Center. She was there to accompany her father, an IEEE senior member, to a conference. “I saw that the truck had traveled across the United States to help with hurricanes, be there for disaster relief, and work with the American Red Cross,” she says. “I thought that was a big deal.” MOVE’s volunteers often coordinate their disaster-relief efforts with the Red Cross. Tours were over for the day, but that didn’t stop her. She was so determined to explore the vehicle that as soon as she got home she went to the MOVE website and requested a visit to her school. It showed up a few weeks later. Yanduru was most interested in its communications system. She was impressed that the vehicle had its own Wi-Fi network, she says. “I really liked how the IEEE-USA MOVE truck is able to establish such a strong communication system in a disaster area,” she says. “The radio engineering communication part really clicked with me.” The vehicle was a big hit at her school, Yanduru says. 
    More than 70 students and teachers toured it. Some of the students brought their family and friends.

    Qualcomm’s devices inspired an interest in engineering

    Yanduru is no stranger to engineering or technology. She comes from a family of engineers and is a member of her school’s radio engineering, coding, and 3D printing clubs. Her father, electrical engineer Naveen Yanduru, is vice president and general manager of Renesas Electronics, in San Diego. Her mother, electrical engineer Arunasree Parsi, has worked as a computer-aided design engineer for Qualcomm and other semiconductor companies. Parsi is now president and CEO of Kaleidochip, also in San Diego.

    “I really liked how the IEEE-USA MOVE truck is able to establish such a strong communication system in a disaster area.”

    Yanduru says her mother sparked her passion for technology. When the girl was a youngster, the two visited the Qualcomm Museum, which displays the company’s modems, chips, tracking systems, and other products. “I got interested in engineering from looking at those devices and seeing how engineering could be applied to so many different aspects of the world and used in so many fields,” she says. Her parents support her interest in engineering because “it’s something that we can talk about,” she says. “I always feel open to discussing technology with them because they have so much knowledge in the field.”

    Students and teachers from San Diego’s Canyon Crest Academy line up to tour the IEEE-USA MOVE truck during its stop at the high school. Ananya Yanduru

    Participating in ham radio, 3D printing, and coding clubs

    It’s no surprise Yanduru was interested in the MOVE’s communication system. She is a cofounder and copresident of her school’s radio engineering club, which has 10 members. It teaches students about topics they need to know to pass the amateur radio licensing test. Yanduru is a licensed amateur radio operator. Her call sign is K06BAM.
“Getting a license sounds cool to a lot of high school students,” she says, “so as the founders, we thought the club would get more interest if we showed them an easy way to get their ham radio license.” Now that most members have a license, they decided to participate in other activities. They first chose NASA’s Radio JOVE. The citizen science project provides kits for building a simple radio telescope to conduct scientific analysis of planets, the Milky Way, and Earth-based radio emissions. The findings are then shared with radio observatories via the Internet. The club’s students plan to build their telescope during summer break, Yanduru says, adding that in the next school year they’ll conduct experiments about energy coming from Jupiter, then will send their results to NASA for analysis. Yanduru also helped establish the school’s 3D printing club. She teaches club members how to print. The six members also help teachers repair the printers. Another hobby of hers is writing code. She is secretary of the academy’s Girls Who Code club, which has about 20 members, not including the classmates they teach. The program aims to increase the number of women in the tech field by teaching coding. She is sharing the knowledge she gains from the club as a volunteer teaching assistant for the League of Amazing Programmers. The San Diego–based nonprofit after-school program trains students in grades 5 to 12 on Java and Python. “I really like being part of all the clubs,” she says, “because they use different aspects of engineering. For 3D, you really get to see the creative and the physical aspects. Radio is obviously more abstract. And coding is fun.” Yanduru is still a few years away from attending college, but she says she plans to pursue an engineering degree. Choosing which field is a dilemma, she says. “There’s a lot of things in electrical engineering and computer engineering that I find interesting,” she says. 
“I’ll definitely be studying something in one of those fields.”

  • How to Build EV Motors Without Rare Earth Elements
    by Vandana Rallabandi on 2 July 2024 at 13:00

    The dilemma is easy to describe. Global efforts to combat climate change hinge on pivoting sharply away from fossil fuels. To do that will require electrifying transportation, primarily by shifting from vehicles with combustion engines to ones with electric drive trains. Such a massive shift will inevitably mean far greater use of electric traction motors, nearly all of which rely on magnets that contain rare earth elements, which cause substantial environmental degradation when their ores are extracted and then processed into industrially useful forms. And for automakers outside of China, there is an additional deterrent: Roughly 90 percent of processed rare earth elements now come from China, so for these companies, increasing dependence on rare earths means growing vulnerability in critical supply chains. Against this backdrop, massive efforts are underway to design and test advanced electric-vehicle (EV) motors that do not use rare earth elements (or use relatively little of them). Government agencies, companies, and universities are working on this challenge, oftentimes in collaborative efforts, in virtually all industrialized countries. In the United States, these initiatives include long-standing efforts at the country’s national laboratories to develop permanent magnets and motor designs that do not use rare earth elements. Also, in a collaboration announced last November, General Motors and Stellantis are working with a startup company, Niron Magnetics, to develop EV motors based on Niron’s rare earth–free permanent magnet. Another automaker, Tesla, shocked observers in March of last year when a senior official declared that the company’s “next drive unit,” which would be based on a permanent magnet, would nevertheless use no “rare earth elements at all.” In Europe, a consortium called Passenger includes 20 partners from industry and academia working on rare earth–free permanent magnets for EVs. 
    We have been working for nearly a decade on magnetic and other aspects of traction-motor design at Oak Ridge National Laboratory (ORNL), in Tennessee, a hub of U.S. research on advanced motors for EVs. Along with colleagues from the National Renewable Energy Laboratory, Ames Laboratory, and the University of Wisconsin–Madison, we have been studying advanced motor concepts as part of the U.S. Department of Energy’s U.S. Drive Technologies Consortium. The group also includes Sandia National Laboratories, Purdue University, and the Illinois Institute of Technology. With all of this activity, you would think that engineers would have by now developed a sophisticated understanding of what is possible with rare earth–free electric motors. And indeed they have. We and other researchers are evaluating promising permanent-magnet materials that don’t use rare earth elements, and we are evaluating possible motor-design changes required to best use these materials. We are also evaluating advanced motor designs that do not use permanent magnets at all. The bottom line is that replacing rare earth–based magnets with non–rare earth ones comes at a cost: degraded motor performance. But innovations in design, manufacturing, and materials will be able to offset—maybe even entirely—this gap in performance. Already, there are a few reports of tantalizing results with innovative new motors whose performance is said to be on a par with the best permanent-magnet synchronous motors.

    Why rare earths make the most powerful electric motors

    Rare earth elements (which people in our line of work often refer to as REEs) have unique properties that make them indispensable to many forms of modern technology. Some of these elements, such as neodymium, samarium, dysprosium, and terbium, can be combined with ferromagnetic elements such as iron and cobalt to produce crystals that are not only highly magnetic but also strongly resist demagnetization.
    The metric typically used to gauge these important qualities of a magnet is called the maximum energy product, measured in megagauss-oersteds (MGOe). The strongest and most commercially successful permanent magnets yet invented, neodymium iron boron, have energy products in the range of 30 to 55 MGOe. For an electric motor based on permanent magnets, the stronger its magnets, the more efficient, compact, and lightweight the motor can be. So the highest-performing EV motors today all use neodymium iron boron magnets. Nevertheless, clever motor design can reduce the performance gap between motors based on rare earth permanent magnets and ones based on other types of magnets. To understand how, you need to know a little more about electric motors.

    The most common type of traction motor in electric vehicles is the interior-mount permanent-magnet synchronous motor. Permanent magnets inside the rotor interact with a rotating magnetic field created by electromagnet windings in the stator, which surrounds the rotor. Oak Ridge National Laboratory

    There are two basic types of electric motors: synchronous and induction. Most modern electric vehicles use a type of synchronous motor that has a rotor equipped with permanent magnets. Induction motors use only electromagnets and are therefore inherently rare earth–free. But they are not used today in most EV models because their performance is generally not on a par with permanent-magnet synchronous motors, although several R&D projects in the United States, Europe, and Asia are trying to improve induction motors. The term “synchronous motors” refers to the fact that the rotor of the motor (the part that turns) rotates in synchrony with the changing magnetic fields produced by the stator (the part that remains stationary). In the rotor, permanent magnets are embedded in a circle around the structure.
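    The 30-to-55 MGOe range quoted above can be sanity-checked from first principles. For an idealized magnet with a straight-line demagnetization curve, the maximum energy product works out to Br²/4μ0, so remanence alone sets a ceiling on magnet strength. The remanence figures in the sketch below (about 1.4 teslas for neodymium iron boron and 0.4 T for hard ferrite) are typical textbook values, not numbers from this article.

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability, H/m
KJ_PER_M3_PER_MGOE = 7.9577   # 1 MGOe is about 7.9577 kJ/m^3

def max_energy_product_mgoe(b_r):
    """(BH)max for an ideal straight-line demagnetization curve:
    Br^2 / (4 * mu0). Real magnets come in at or below this ceiling."""
    bh_max_kj = b_r**2 / (4 * MU0) / 1000.0
    return bh_max_kj / KJ_PER_M3_PER_MGOE

# Assumed typical remanence values: NdFeB ~1.4 T, hard ferrite ~0.4 T.
print(round(max_energy_product_mgoe(1.4)))  # ~49 MGOe, inside the 30-55 range
print(round(max_energy_product_mgoe(0.4)))  # ~4 MGOe, why ferrite motors grow heavy
```

    The roughly tenfold gap between these two ceilings is the quantitative reason the article's ferrite-based designs pay such a steep penalty in size and weight.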
    In the stator, also in a circular arrangement, electromagnets are pulsed with electricity one after another to set up a rotating magnetic field. This process causes the rotor magnets and stator magnets to attract and repel one another sequentially, producing rotation and torque. Synchronous motors, too, fall into several categories. Two important types are surface-mount permanent-magnet synchronous motors and synchronous reluctance motors. In the former group, permanent magnets are mounted on the external surface of the rotor, and torque is produced because different parts of the stator and rotor either attract or repel. In a synchronous reluctance motor, on the other hand, the rotor doesn’t need to have permanent magnets at all. What makes the motor spin is a phenomenon called magnetic reluctance, which refers to how much a material opposes magnetic flux passing through it. Ferromagnetic materials have low values of reluctance and will tend to align themselves with strong magnetic fields. This phenomenon is exploited to cause a ferromagnetic rotor, in a reluctance motor, to spin. (Some reluctance motors also employ permanent magnets to assist that rotation.) If a motor depends mainly on the interaction between the stator and rotor magnetic fields, it is called a permanent-magnet dominated motor. If on the other hand it depends on the torque produced by differences in reluctance, it is a permanent-magnet assisted motor. The combined use of both types of torque—that produced by the attraction and repulsion of permanent magnets and that produced by the tendency of magnetic lines of force to flow along a path of least reluctance—is the key strategy being used by engineers striving to achieve high performance in a motor that is less reliant on REE magnets.

    Replacing REE-based magnets with non-REE ones comes at a cost: degraded motor performance. But innovations in motor design, manufacturing, and materials will be able to offset—maybe even entirely—this gap in performance.

    The most common motor type at the moment combining the two kinds of torque is the interior-mount permanent-magnet motor, in which the permanent magnets embedded within the rotor add to the reluctance torque. Many commercial EV manufacturers, including GM, Tesla, and Toyota, now use this type of rotor design. The design of the motors for the Toyota Prius underscores the effectiveness of this approach. In these motors, the magnet mass decreased significantly over a period of 13 years, from 1.2 kilograms in the 2004 Prius to about 0.5 kg in the 2017 Prius. Much the same occurred with the Chevrolet Bolt motor, which reduced the overall usage of magnet material by 30 percent compared with the motor in its predecessor, the Chevrolet Spark.

    Wringing the most out of permanent magnets without rare earths

    But what about getting rid of REEs entirely? Here, there are two possibilities: Use REE-free permanent magnets in a motor designed to make the most of them, or use a motor that dispenses with permanent magnets entirely, in favor of electromagnets. To understand the suitability of a particular REE-free permanent magnet for use in a powerful traction motor, you have to consider a couple of additional characteristics of a permanent magnet: remanence and coercivity. To begin with, recall the metric used to compare the strength of different permanent-magnet materials: maximum energy product. These three parameters—maximum energy product, remanence, and coercivity—largely indicate how well a permanent-magnet material will perform in an electric motor. Remanence indicates the amount of magnetic intensity, as measured by the density of the lines of force, left in a permanent magnet after the magnetic field that magnetized this magnet is withdrawn. Remanence is important because without it you wouldn’t have a permanent magnet.
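    Stepping back to the magnet-plus-reluctance combination described above: it has a standard textbook expression. In the rotating dq frame, torque is 1.5 p [λ_pm i_q + (L_d − L_q) i_d i_q], where the first term is magnet torque and the second is reluctance torque. The sketch below evaluates both terms for made-up but plausible machine parameters; none of the numbers come from the motors discussed in this article.

```python
def ipm_torque(p_pairs, lam_pm, l_d, l_q, i_d, i_q):
    """Standard dq-frame torque split for an interior permanent-magnet
    machine: magnet torque plus reluctance torque, in newton-meters."""
    magnet = 1.5 * p_pairs * lam_pm * i_q
    reluctance = 1.5 * p_pairs * (l_d - l_q) * i_d * i_q
    return magnet, reluctance

# Illustrative values only: 4 pole pairs, 0.08 Wb magnet flux linkage,
# Ld = 0.3 mH, Lq = 0.9 mH (saliency), id = -80 A, iq = 150 A.
m, r = ipm_torque(4, 0.08, 0.3e-3, 0.9e-3, -80.0, 150.0)
print(round(m, 1), round(r, 1))  # magnet ~72 N.m, reluctance ~43 N.m here
```

    Note that with Ld smaller than Lq and a negative d-axis current, the reluctance term adds to the magnet term; that is exactly the effect designers exploit to get by with weaker, or fewer, magnets.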
    And the higher the remanence of the material, the stronger the forces of magnetic attraction and repulsion that create torque. The coercivity of a permanent magnet is a measure of its ability to resist demagnetization. The higher the value of coercivity, the harder it is to demagnetize the magnet with an external magnetic field. For an EV traction motor, an optimal permanent magnet, such as neodymium iron boron, has high maximum energy product, high remanence, and high coercivity. No REE-free permanent magnet has all of these characteristics. So if you replace neodymium iron boron magnets with, say, ferrite magnets in a motor, you can expect a decrease in torque output and also must accept a greater risk that the magnets will demagnetize during operation.

    An experimental motor built by the authors at Oak Ridge National Laboratory did not use any heavy rare earth elements. Neodymium iron boron permanent magnets are mounted on the external surface of the rotor. These magnets are represented by the teal-colored ring of blocks surrounding the copper-colored stator windings. To save space, the motor’s inverter and control electronics were installed inside the stator. Oak Ridge National Laboratory

    Motor engineers can minimize the difference by designing a motor that exploits both permanent magnets and reluctance. But even with a highly optimized design, a motor based on ferrite magnets will be considerably heavier—perhaps a third or more—if it is to achieve the same performance as a motor with rare earth magnets. One technique used to wring maximum performance out of ferrite magnets is to concentrate the flux from those magnets to the maximum extent possible. It’s analogous to passing moving water through a funnel: The water moves faster in the narrow opening. Researchers have built such machines, called spoke-ferrite magnet motors, but have found them to be about 30 percent heavier than comparable motors based on REE magnets.
And there’s more bad news: Spoke-type motors can be complex to manufacture and pose mechanical challenges. Some designers have proposed using another kind of non-REE magnet, one made from an aluminum nickel cobalt alloy called alnico, commonly used in the magnets that hold refrigerator doors shut. Although alnico magnets have high remanence, their coercivity is quite low, making them prone to demagnetization. To address this issue, several researchers have studied and designed variable-flux memory motors, which use a magnetizing component of current to aid in torque production, in effect keeping the magnets from demagnetizing during operation. Additionally, researchers from the Ames Laboratory have shown that alnico magnets can have increased coercivity while maintaining their high remanence. Lately, there’s been a lot of attention focused on a new type of permanent-magnet material, iron nitride (FeN). This magnet, produced by Niron Magnetics, has high remanence, equivalent to that of REE magnets, but like alnico has low coercivity—about a fifth that of a comparable neodymium iron boron magnet. Because of these fundamentally different properties, FeN magnets require the development of new rotor designs, which will probably resemble those of past alnico motors. Niron is now developing such designs with automotive partners, including General Motors. Yet another REE-free permanent-magnet material that comes up in discussions of future motors is manganese bismuth (MnBi), which has been the subject of collaborative research at the University of Pittsburgh, Iowa State University, and Powdermet Inc. Together these engineers designed a surface-mount permanent-magnet synchronous motor using MnBi magnets. The remanence and coercivity of these magnets are higher than those of ferrite magnets but lower than those of neodymium iron boron (NdFeB).
The researchers found that a MnBi-magnet motor can produce the same torque output as a NdFeB-magnet motor but with substantial compromises: a whopping 60 percent increase in volume and a 65 percent increase in weight. On the bright side, the researchers suggested that replacing NdFeB magnets with MnBi magnets could reduce the overall cost of the motor by 32 percent. Another strategy for reducing rare earth content in motors involves eliminating just the heavy rare earth elements used in some of these magnets. NdFeB magnets, for example, typically contain small amounts of the heavy rare earth element dysprosium, used to increase their coercivity at high temperatures. (Heavy rare earth metals are generally in shorter supply than the light rare earths, such as neodymium.) The rub with not using them is that high-temperature coercivity then suffers. So the major challenge in designing this kind of motor is keeping the rotor cool. Last year, at Oak Ridge National Laboratory, we developed a 100-kilowatt traction motor that uses no heavy rare earth elements in its magnets. Another nice feature is that its power electronics are integrated inside it. These power electronics include the inverter, which takes direct-current power from the battery and feeds the motor with alternating current at the proper frequency to drive the machine. We faced several fundamental challenges in keeping the magnets from getting too hot. You see, permanent magnets are good conductors. And when an electrical conductor moves in a magnetic field, which is what rotor magnets do while the motor is operating, currents are induced in it. These currents, which do not contribute to the torque, heat up the magnets and can demagnetize them. One way to reduce this heating is to break up the path of the circulating currents by making the magnets from thin segments that are electrically insulated from one another. In our motor, each of these segments was only 1 millimeter thick.
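The benefit of thin segments follows from the classic thin-slab eddy-current loss estimate, in which loss density scales with the square of the segment thickness. A sketch, with assumed illustrative numbers for the flux ripple and its frequency:

```python
import math

def eddy_loss_density(b_peak, freq, thickness, resistivity):
    """Approximate eddy-current loss in W/m^3 for a thin conducting slab:
    P = (pi^2 * B^2 * f^2 * t^2) / (6 * rho)."""
    return (math.pi**2 * b_peak**2 * freq**2 * thickness**2) / (6 * resistivity)

RHO_NDFEB = 1.4e-6  # resistivity of sintered NdFeB, ohm-meters (approximate)

# Assumed 0.05 T flux-density ripple at 5 kHz (harmonics, not the fundamental):
loss_2mm = eddy_loss_density(0.05, 5e3, 2e-3, RHO_NDFEB)
loss_1mm = eddy_loss_density(0.05, 5e3, 1e-3, RHO_NDFEB)
# Halving the segment thickness cuts the loss density by a factor of 4,
# which is why 1-millimeter insulated segments run so much cooler.
```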
We chose to use a grade of NdFeB magnets called N50 that can operate at temperatures up to 80 °C. Also, we needed to use a carbon-fiber-and-epoxy system to reinforce the outer diameter of the rotor to let it spin at speeds as high as 20,000 rpm. After analyzing our motor prototype, we discovered it would be necessary to force air through the motor to reduce its temperature when operating at maximum speed. While that’s not ideal, it’s a reasonable compromise to avoid having to use heavy REEs in the design.

New approaches for advanced motors

Perhaps the most attractive near-term option to make powerful motors that lack REEs entirely is to build synchronous motors that have rotors equipped with electromagnets (meaning coils of wire), either with or without ferrite magnets included with them. But doing that requires that you somehow pass electrical current to those spinning coils. The traditional solution is to use carbon brushes to make electrical contact with spinning metal rings, called slip rings. This technique allows you to apply direct current to the rotor to energize its electromagnets. Those brushes produce dust, though, and eventually wear out, so these motors aren’t suitable for use in EVs. To address this issue, engineers have devised what are called rotary transformers or exciters. They employ an inductive or capacitive system to transfer power wirelessly to the spinning rotor. These motors have a great advantage over conventional, permanent-magnet synchronous motors, which is that their rotor’s magnetic field can be precisely adjusted, simply by controlling the current to the rotor’s electromagnets. That in turn permits a technique called field weakening, which allows high efficiency to be maintained through a wide range of operating speeds. In the way they produce torque, synchronous electric motor types can be thought of as existing on a continuum between two different extremes.
[Chart: At the upper left is the surface permanent-magnet motor, which produces torque solely from the interaction between permanent magnets in the rotor and electromagnets in the stator. At the lower right is the synchronous reluctance motor, which creates torque by exploiting an entirely different phenomenon—magnetic reluctance, which refers to how much a material opposes magnetic flux passing through it. Most motor designs maximize torque by combining these two kinds of torque. (Oak Ridge National Laboratory)]

A notable recent example is a motor built by the automotive supplier ZF Group. Last year the company announced it had produced a synchronous motor in which electromagnets in the rotor are powered by an inductive system that fits inside the machine’s rotor shaft. The 220-kW motor has power-density and efficiency characteristics on a par with those of the NdFeB permanent-magnet motors now used in EVs, according to a company official. New materials can also help bridge the gap between REE-magnet and non-REE-magnet motors. For example, high-silicon steel, renowned for its superior magnetic properties, emerges as a promising candidate for rotor construction, offering the potential to improve the magnetic efficiency of REE-free motors. Concurrently, using high-conductivity copper alloys or ultraconducting copper strands can greatly reduce electrical losses and improve overall performance. Doubling the conductivity of copper, for example, could reduce the volume of certain motors by 30 percent. The strategic integration of such materials could dramatically narrow the performance gap between REE-containing and REE-free motors. Another good example of an advanced material that could make a big difference is a dual-phase magnetic material developed by GE Aerospace, which can be magnetized either very strongly or not at all in specified areas.
By selectively making certain sections of the rotor nonmagnetic, the GE Aerospace team demonstrated that it is possible to eliminate virtually all magnetic leakage, which in turn allowed them to forgo using rare earth permanent magnets in the motor.

How engineers will navigate the transition to REE-free motors

The transition toward rare earth–free motors for EVs is a major and pivotal engineering endeavor. It will be difficult, but research is beginning to yield intriguing and encouraging results. There will soon be multiple designs available—with, alas, a complex array of trade-offs. Motor weight, power density, cost, manufacturability, and overall performance dynamics will all be important considerations. And success in the marketplace will no doubt depend on an equally complex set of economic factors, so it’s very hard to predict which designs will dominate. What’s becoming clear, though, is that it’s perfectly feasible for REE-free motors to one day become mainstream. That outcome will require continued and concerted effort. But we see no reason why engineers can’t navigate the complexities of this transition, ensuring that the next generation of EVs is more environmentally benign. Already, at ORNL and elsewhere, AI-enabled motor-design tools are accelerating the development of these REE-free motors. Today, the large-scale use of REE magnets is marked by arguments pitting technological benefits against environmental and ethical considerations. Soon, those arguments could be much less relevant. We’re not there yet. As with any major technological transition, the journey to rare earth–free motors won’t be short or straight. But it will be a journey well worth taking.

  • Taenzer Fellowship for Disability-Engaged Journalism
    by Stephen Cass on 1. Jula 2024. at 18:18

    Open Call for Applications and Nominations: Do you know of a passionate disabled writer who is eager to explore the intersection of journalism, technology and disability? Do you aspire to shed a critical light on the impact of assistive technologies and mainstream technologies through a disabled lens? If so, we invite you to apply or nominate a deserving candidate for IEEE Spectrum’s Taenzer Fellowship for Disability-Engaged Technology Journalism. About the Fellowship: Our Fellowship for Disability-Engaged Journalism was created to resource new, early- and mid-career disabled journalists as they produce compelling narratives that spotlight the everyday and unique challenges and ideas that disabled people encounter with technology. Whether you’re an experienced journalist or an emerging writer, this fellowship offers a unique opportunity to develop your craft, while being supported with the accommodations you need to pursue stories. The Fellowship will run through the end of 2025. Benefits of the Fellowship: Stipend: Fellows will receive a $2500 one-time stipend to support their commitment to investigative reporting on disability-related tech topics. Compensation: Ordinary professional freelance compensation will be provided by contract for the stories developed during the fellowship period, ensuring that fellows are recognized for their valuable contributions. Coverage of Assistive Services: We understand the importance of accessibility in journalism. Therefore, as part of assignment contracts, the fellowship will cover expenses for assistive services required by fellows to pursue their stories. These services include but are not limited to American Sign Language (ASL) interpreters, mobility aids, and document conversion. Schedule Flexibility: This fellowship will represent a part-time commitment, and as such can be maintained alongside other part-time work, freelance or gig work, and other schedule obligations. 
The requirement is that Taenzer Fellows have sufficient time to engage with the program through development workshops and report and write multiple news stories and one feature-length article during the course of their 18-month fellowship period. The scheduling will be as flexible as possible. Eligibility Criteria: - Journalists at early- to mid-career stages, including freelancers and those interested in entering journalism, are encouraged to apply. - Demonstrated interest or experience in writing about technology, disability rights, or related topics. - Capacity to commit to the fellowship’s duration and deliver high-quality journalistic work. How to Apply or Nominate a Candidate: To apply or to nominate a deserving candidate for this fellowship, please submit the following materials: 1. A resume or curriculum vitae highlighting the candidate’s journalism experience and relevant achievements, or an email address where we can request a resume from nominees. 2. A statement of interest (500 words maximum) outlining your motivation for nominating someone or applying, your/their experience or interest in disability-engaged journalism, and what areas of technology you/they are interested in examining. 3. Three samples of the candidate’s published writing, preferably showcasing their ability to cover topics related to assistive technologies, disability rights, or technology through a disability lens. Self-published items, such as blog posts, or items written for limited-circulation venues, such as an organizational newsletter, will be considered. Submission Deadline: August 1st, 2024. Please send your application materials to cass.s@ieee.org with the subject line “Taenzer Fellowship Application.” Contact Information: For inquiries or further information, please contact Stephen Cass at cass.s@ieee.org or Margo Anderson at m.k.anderson@ieee.org. Join us in making a meaningful impact through storytelling that fosters understanding, empathy, and inclusivity in journalism.
Nominate someone now for the IEEE Spectrum Taenzer Fellowship for Disability-Engaged Technology Journalism and be a catalyst for change in the technology media world today!

  • The Best Bionic Leg Yet
    by Greg Uyeno on 1. Jula 2024. at 16:59

For the first time, a small group of patients with amputations below the knee were able to control the movements of their prosthetic legs through neural signals—rather than relying on programmed cycles for all or part of a motion—and resume walking with a natural gait. The achievement required a specialized amputation surgery combined with a non-invasive surface electrode connection to a robotic prosthetic lower leg. A study describing the technologies was published today in the journal Nature Medicine. “What happens then is quite miraculous. The patients that have this neural interface are able to walk at normal speeds; and up and down steps and slopes; and maneuver obstacles really without thinking about it. It’s natural. It’s involuntary,” said co-author Hugh Herr, who develops bionic prosthetics at the MIT Media Lab. “Even though their limb is made of titanium and silicone—all these various electromechanical components—the limb feels natural and it moves naturally, even without conscious thought.” The approach relies on surgery at the amputation site to create what the researchers call an agonist-antagonist myoneural interface, or AMI. The procedure involves connecting pairs of muscles (in the case of below-the-knee amputation, two pairs), as well as the introduction of proprietary synthetic elements. The interface creates a two-way connection between body and machine. Muscle-sensing electrodes send signals to a small computer in the prosthetic limb that interprets them as angles and forces for joints at the ankle and ball of the foot. It also sends information back about the position of the artificial leg, restoring a sense of where the limb is in space, also known as proprioception.
“The particular mode of control is far beyond what anybody else has come up with,” said Daniel Ferris, a neuromechanical engineer at the University of Florida; Ferris was not involved in the study, but has worked on neural interfaces for controlling lower limb prostheses. “It’s a really novel idea that they’ve built on over the last eight years that’s showing really positive outcomes for better bionic lower legs.” The latest publication is notable for a larger participant pool than previous studies, with seven treatment patients and seven control patients with amputations and typical prosthetic legs. To test the bionic legs, patients were asked to walk on level ground at different speeds; up and down slopes and stairs; and to maneuver around obstacles. The AMI users had a more natural gait, more closely resembling movement by someone using a natural limb. More naturalistic motion can improve freedom of movement, particularly over challenging terrain, but in other studies researchers have also noted reduced energetic costs, reduced stress on the body, and even social benefits for some amputees. Co-author Hyungeun Song, a postdoctoral researcher at MIT, says the group was surprised by the efficiency of the bionic setup. The prosthetic interface sent just 18 percent of the typical amount of information that’s sent from a limb to the spine, yet it was enough to allow patients to walk with what was considered a normal gait.

Next Steps for the Bionic Leg

AMI amputations have now become the standard at Brigham and Women’s Hospital in Massachusetts, where co-author Matthew Carty works. And because of patient benefits in terms of pain and ease of using even passive (or non-robotic) prosthetics, this technique—or something similar—could spread well beyond the current research setting. To date, roughly 60 people worldwide have received AMI surgery above or below either an elbow or knee.
In principle, Herr said, someone with a previously amputated limb, such as himself, could undergo AMI rehabilitation, and he is strongly considering the procedure. More than 2 million Americans are currently living with a lost limb, according to the Amputee Coalition, and nearly 200,000 lower legs are amputated each year in the United States. On the robotics side, there are already commercial leg prosthetics that could be made compatible with the neural interface. The area in greatest need of development is the connection between amputation site and prosthesis. Herr says commercialization of that interface might be around five years away. Herr says his long-term goal is neural integration and embodiment: the sense that a prosthetic is part of the body, rather than a tool. The new study “is a critical step forward—pun intended.”

  • Shipt’s Algorithm Squeezed Gig Workers. They Fought Back
    by Dana Calacci on 1. Jula 2024. at 13:00

In early 2020, gig workers for the app-based delivery company Shipt noticed something strange about their paychecks. The company, which had been acquired by Target in 2017 for US $550 million, offered same-day delivery from local stores. Those deliveries were made by Shipt workers, who shopped for the items and drove them to customers’ doorsteps. Business was booming at the start of the pandemic, as the COVID-19 lockdowns kept people in their homes, and yet workers found that their paychecks had become…unpredictable. They were doing the same work they’d always done, yet their paychecks were often less than they expected. And they didn’t know why. On Facebook and Reddit, workers compared notes. Previously, they’d known what to expect from their pay because Shipt had a formula: It gave workers a base pay of $5 per delivery plus 7.5 percent of the total amount of the customer’s order through the app. That formula allowed workers to look at order amounts and choose jobs that were worth their time. But Shipt had changed the payment rules without alerting workers. When the company finally issued a press release about the change, it revealed only that the new pay algorithm paid workers based on “effort,” which included factors like the order amount, the estimated amount of time required for shopping, and the mileage driven.

[Diagram: The Shopper Transparency Tool used optical character recognition to parse workers’ screenshots and find the relevant information (A). The data from each worker was stored and analyzed (B), and workers could interact with the tool by sending various commands to learn more about their pay (C). (Dana Calacci)]

The company claimed this new approach was fairer to workers and that it better matched the pay to the labor required for an order. Many workers, however, just saw their paychecks dwindling. And since Shipt didn’t release detailed information about the algorithm, it was essentially a black box that the workers couldn’t see inside.
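The old formula workers had relied on is simple enough to write out. A sketch of it (the base pay and percentage are as reported by workers; the code is an illustration of their understanding, not published Shipt logic):

```python
def old_shipt_pay(order_total, tip=0.0):
    """Pre-2020 pay as workers understood it: $5 base plus 7.5% of the order."""
    return 5.00 + 0.075 * order_total + tip

# A $120 order pays $5 + $9 = $14 before tip, so a worker scanning open
# orders could estimate their pay at a glance.
pay = old_shipt_pay(120.00)
```

That predictability is exactly what the opaque "effort" algorithm took away.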
The workers could have quietly accepted their fate, or sought employment elsewhere. Instead, they banded together, gathering data and forming partnerships with researchers and organizations to help them make sense of their pay data. I’m a data scientist; I was drawn into the campaign in the summer of 2020, and I proceeded to build an SMS-based tool—the Shopper Transparency Calculator—to collect and analyze the data. With the help of that tool, the organized workers and their supporters essentially audited the algorithm and found that it had given 40 percent of workers substantial pay cuts. The workers showed that it’s possible to fight back against the opaque authority of algorithms, creating transparency despite a corporation’s wishes.

How We Built a Tool to Audit Shipt

It started with a Shipt worker named Willy Solis, who noticed that many of his fellow workers were posting in the online forums about their unpredictable pay. He wanted to understand how the pay algorithm had changed, and he figured that the first step was documentation. At that time, every worker hired by Shipt was added to a Facebook group called the Shipt List, which was administered by the company. Solis posted messages there inviting people to join a different, worker-run Facebook group. Through that second group, he asked workers to send him screenshots showing their pay receipts from different months. He manually entered all the information into a spreadsheet, hoping that he’d see patterns and thinking that maybe he’d go to the media with the story. But he was getting thousands of screenshots, and it was taking a huge amount of time just to update the spreadsheet.

[Video: The Shipt Calculator: Challenging Gig Economy Black-box Algorithms with Worker Pay Stubs (youtu.be)]

That’s when Solis contacted Coworker, a nonprofit organization that supports worker advocacy by helping with petitions, data analysis, and campaigns. Drew Ambrogi, then Coworker’s director of digital campaigns, introduced Solis to me.
I was working on my Ph.D. at the MIT Media Lab, but feeling somewhat disillusioned about it. That’s because my research had focused on gathering data from communities for analysis, but without any community involvement. I saw the Shipt case as a way to work with a community and help its members control and leverage their own data. I’d been reading about the experiences of delivery gig workers during the pandemic, who were suddenly considered essential workers but whose working conditions had only gotten worse. When Ambrogi told me that Solis had been collecting data about Shipt workers’ pay but didn’t know what to do with it, I saw a way to be useful. Throughout the worker protests, Shipt said only that it had updated its pay algorithm to better match payments to the labor required for jobs; it wouldn’t provide detailed information about the new algorithm.

[Image: Shipt’s corporate photographs present idealized versions of happy Shipt shoppers. (Shipt)]

Companies whose business models rely on gig workers have an interest in keeping their algorithms opaque. This “information asymmetry” helps companies better control their workforces—they set the terms without divulging details, and workers’ only choice is whether or not to accept those terms. The companies can, for example, vary pay structures from week to week, experimenting to find out, essentially, how little they can pay and still have workers accept the jobs. There’s no technical reason why these algorithms need to be black boxes; the real reason is to maintain the power structure. For Shipt workers, gathering data was a way to gain leverage. Solis had started a community-driven research project that was collecting good data, but in an inefficient way. I wanted to automate his data collection so he could do it faster and at a larger scale. At first, I thought we’d create a website where workers could upload their data.
But Solis explained that we needed to build a system that workers could easily access with just their phones, and he argued that a system based on text messages would be the most reliable way to engage workers. Based on that input, I created a textbot: Any Shipt worker could send screenshots of their pay receipts to the textbot and get automated responses with information about their situation. I coded the textbot in a simple Python script and ran it on my home server; we used a service called Twilio to send and receive the texts. The system used optical character recognition—the same technology that lets you search for a word in a PDF file—to parse the image of the screenshot and pull out the relevant information. It collected details about the worker’s pay from Shipt, any tip from the customer, and the time, date, and location of the job, and it put everything in a Google spreadsheet. The character-recognition system was fragile, because I’d coded it to look for specific pieces of information in certain places on the screenshot. A few months into the project, when Shipt did an update and the workers’ pay receipts suddenly looked different, we had to scramble to update our system. Each person who sent in screenshots had a unique ID tied to their phone number, but the only demographic information we collected was the worker’s metro area. From a research perspective, it would have been interesting to see if pay rates had any connection to other demographics, like age, race, or gender, but we wanted to assure workers of their anonymity, so they wouldn’t worry about Shipt firing them just because they had participated in the project. Sharing data about their work was technically against the company’s terms of service; astoundingly, workers—including gig workers who are classified as “independent contractors”—often don’t have rights to their own data.
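A receipt parser along the lines described above might look like the sketch below. The field labels and regular expressions are hypothetical stand-ins (the actual tool keyed on specific regions of Shipt’s receipt screen), and the Twilio and OCR plumbing is omitted:

```python
import re

# Hypothetical patterns for the text that OCR pulls out of a pay screenshot.
RECEIPT_PATTERNS = {
    "pay": re.compile(r"Shipt pay:?\s*\$?([\d.]+)"),
    "tip": re.compile(r"Tip:?\s*\$?([\d.]+)"),
    "date": re.compile(r"(\d{2}/\d{2}/\d{4})"),
}

def parse_receipt(ocr_text):
    """Extract pay fields from OCR'd screenshot text; missing fields become None."""
    row = {}
    for field, pattern in RECEIPT_PATTERNS.items():
        match = pattern.search(ocr_text)
        row[field] = match.group(1) if match else None
    return row

sample = "Order #1234  06/15/2020\nShipt pay: $9.42\nTip: $5.00"
parsed = parse_receipt(sample)  # parsed["pay"] == "9.42", parsed["tip"] == "5.00"
```

Keying on fixed labels like this is exactly the fragility described above: when the receipt layout changes, every pattern has to be updated.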
Once the system was ready, Solis and his allies spread the word via a mailing list and workers’ groups on Facebook and WhatsApp. They called the tool the Shopper Transparency Calculator and urged people to send in screenshots. Once an individual had sent in 10 screenshots, they would get a message with an initial analysis of their particular situation: The tool determined whether the person was getting paid under the new algorithm, and if so, it stated how much more or less money they’d have earned if Shipt hadn’t changed its pay system. A worker could also request information about how much of their income came from tips and how much other shoppers in their metro area were earning.

How the Shipt Pay Algorithm Shortchanged Workers

By October of 2020, we had received more than 5,600 screenshots from more than 200 workers, and we paused our data collection to crunch the numbers. We found that 40 percent of workers were earning less under the new algorithm, with half of those workers receiving a pay cut of 10 percent or greater. What’s more, looking at data from all geographic regions, we found that about one-third of workers were earning less than their state’s minimum wage. It wasn’t a clear case of wage theft, because 60 percent of workers were making about the same or slightly more under the new scheme. But we felt that it was important to shine a light on those 40 percent of workers who had gotten an unannounced pay cut through a black-box transition. In addition to fair pay, workers also want transparency and agency. This project highlighted how much effort and infrastructure it took for Shipt workers to get that transparency: It took a motivated worker, a research project, a data scientist, and custom software to reveal basic information about these workers’ conditions.
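The per-worker analysis boils down to a comparison against the old formula. A simplified sketch (tips excluded; the real tool also tracked tip shares and metro-area comparisons):

```python
def audit_worker(receipts):
    """receipts: list of (order_total, observed_pay) pairs, tips excluded.

    Returns the worker's total shortfall relative to the old
    $5-base-plus-7.5-percent formula (negative means they earned more).
    """
    old_total = sum(5.00 + 0.075 * order for order, _ in receipts)
    observed_total = sum(pay for _, pay in receipts)
    return old_total - observed_total

# Ten deliveries of $80 orders, each paid $9.50 under the new scheme:
shortfall = audit_worker([(80.00, 9.50)] * 10)
# The old formula would have paid 10 * $11 = $110; observed pay is $95,
# a $15 shortfall, or roughly a 14 percent cut for this worker.
```

Aggregating exactly this kind of per-worker delta across 200-plus workers is what produced the 40 percent figure.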
In a fairer world where workers have basic data rights and regulations require companies to disclose information about the AI systems they use in the workplace, this transparency would be available to workers by default. Our research didn’t determine how the new algorithm arrived at its payment amounts. But a July 2020 blog post from Shipt’s technical team talked about the data the company possessed about the size of the stores it worked with and their calculations for how long it would take a shopper to walk through the space. Our best guess was that Shipt’s new pay algorithm estimated the amount of time it would take for a worker to complete an order (including both time spent finding items in the store and driving time) and then tried to pay them $15 per hour. It seemed likely that the workers who received a pay cut took more time than the algorithm’s prediction.

[Photo: Shipt workers protested in front of the headquarters of Target (which owns Shipt) in October 2020. They demanded the company’s return to a pay algorithm that paid workers based on a simple and transparent formula. (The SHIpT List)]

Solis and his allies used the results to get media attention as they organized strikes, boycotts, and a protest at Shipt headquarters in Birmingham, Ala., and Target’s headquarters in Minneapolis. They asked for a meeting with Shipt executives, but they never got a direct response from the company. Its statements to the media were maddeningly vague, saying only that the new payment algorithm compensated workers based on the effort required for a job, and implying that workers had the upper hand because they could “choose whether or not they want to accept an order.” Did the protests and news coverage have an effect on worker conditions? We don’t know, and that’s disheartening. But our experiment served as an example for other gig workers who want to use data to organize, and it raised awareness about the downsides of algorithmic management.
What’s needed are wholesale changes to platforms’ business models.

An Algorithmically Managed Future?

Since 2020, there have been a few hopeful steps forward. The European Union recently came to an agreement about a rule aimed at improving the conditions of gig workers. The so-called Platform Workers Directive is considerably watered down from the original proposal, but it does ban platforms from collecting certain types of data about workers, such as biometric data and data about their emotional state. It also gives workers the right to information about how the platform algorithms make decisions and to have automated decisions reviewed and explained, with the platforms paying for the independent reviews. While many worker-rights advocates wish the rule went further, it’s still a good example of regulation that reins in the platforms’ opacity and gives workers back some dignity and agency. Some debates over gig workers’ data rights have even made their way to courtrooms. For example, the Worker Info Exchange, in the United Kingdom, won a case against Uber in 2023 about its automated decisions to fire two drivers. The court ruled that the drivers had to be given information about the reasons for their dismissal so they could meaningfully challenge the robo-firings. In the United States, New York City passed the country’s first minimum-wage law for gig workers, and last year the law survived a legal challenge from DoorDash, Uber, and Grubhub. Before the new law, the city had determined that its 60,000 delivery workers were earning about $7 per hour on average; the law raised the rate to about $20 per hour. But the law does nothing about the power imbalance in gig work—it doesn’t improve workers’ ability to determine their working conditions, gain access to information, reject surveillance, or dispute decisions.
[Photo: Willy Solis spearheaded the effort to determine how Shipt had changed its pay algorithm by organizing his fellow Shipt workers to send in data about their pay—first directly to him, and later using a textbot. (Willy Solis)]

Elsewhere in the world, gig workers are coming together to imagine alternatives. Some delivery workers have started worker-owned services and have joined together in an international federation called CoopCycle. When workers own the platforms, they can decide what data they want to collect and how they want to use it. In Indonesia, couriers have created “base camps” where they can recharge their phones, exchange information, and wait for their next order; some have even set up informal emergency response services and insurance-like systems that help couriers who have road accidents. While the story of the Shipt workers’ revolt and audit doesn’t have a fairy-tale ending, I hope it’s still inspiring to other gig workers as well as shift workers whose hours are increasingly controlled by algorithms. Even if they want to know a little more about how the algorithms make their decisions, these workers often lack access to data and technical skills. But if they consider the questions they have about their working conditions, they may realize that they can collect useful data to answer those questions. And there are researchers and technologists who are interested in applying their technical skills to such projects. Gig workers aren’t the only people who should be paying attention to algorithmic management. As artificial intelligence creeps into more sectors of our economy, white-collar workers find themselves subject to automated tools that define their workdays and judge their performance. During the COVID-19 pandemic, when millions of professionals suddenly began working from home, some employers rolled out software that captured screenshots of their employees’ computers and algorithmically scored their productivity.
It’s easy to imagine how the current boom in generative AI could build on these foundations: For example, large language models could digest every email and Slack message written by employees to provide managers with summaries of workers’ productivity, work habits, and emotions. These types of technologies not only harm people’s dignity, autonomy, and job satisfaction but also create information asymmetry that limits people’s ability to challenge or negotiate the terms of their work. We can’t let it come to that. The battles that gig workers are fighting are the leading front in the larger war for workplace rights, which will affect all of us. The time to define the terms of our relationship with algorithms is right now.
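The crowdsourced audit described above—workers submitting their own pay records to test how an algorithm change affected them—can be sketched in a few lines. This is a hypothetical illustration: the field layout, worker IDs, and numbers are invented for the example and are not Shipt’s data format or the actual textbot pipeline.

```python
# Toy version of a crowdsourced pay audit: each worker submits
# (total_pay, hours) for a period before and after an algorithm change.
from statistics import median

def audit_pay_change(before, after):
    """Compare per-worker hourly rates before and after a pay-algorithm
    change. `before`/`after` map worker id -> (total_pay, hours_worked).
    Returns the median change in hourly rate and the share of workers
    whose rate fell."""
    changes = {}
    for worker in before.keys() & after.keys():  # only workers in both sets
        pay_b, hrs_b = before[worker]
        pay_a, hrs_a = after[worker]
        changes[worker] = pay_a / hrs_a - pay_b / hrs_b
    cut_share = sum(1 for d in changes.values() if d < 0) / len(changes)
    return median(changes.values()), cut_share

# Invented submissions: two of three workers saw their hourly rate fall.
before = {"w1": (200.0, 10), "w2": (150.0, 10), "w3": (180.0, 10)}
after  = {"w1": (170.0, 10), "w2": (160.0, 10), "w3": (150.0, 10)}
med, cut = audit_pay_change(before, after)
```

With enough submissions, even this simple median-and-share summary can show whether a pay change is broad or concentrated—exactly the kind of question the Shipt workers set out to answer.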

  • Industry-Leading Automotive Connectivity Solutions and Extensive Engineering Expertise
    by TE Connectivity on 1. Jula 2024. at 10:00

    This is a sponsored article brought to you by TE Connectivity. In the fast-moving automotive industry, consumer expectations are evolving just as quickly as the technologies and solutions that shape vehicle design. Your consumers want reliable, efficient, and safe vehicles that also incorporate the connected, immersive environment they’ve come to expect from their devices and electronics. The right connectivity solutions can help you deliver next-generation vehicles that exceed driver expectations. TE Connectivity (TE) solutions can be found in nearly every vehicle — making us your go-to, complete connectivity partner for the most advanced vehicle architectures of today and tomorrow. We understand the automotive industry and your challenges, and we offer a broad portfolio of high-performance data, signal, and power connectivity solutions. Using our customer-centric engineering expertise, we’ll help you tackle even your most complex design challenges. We also deliver personalized sales support and a comprehensive distribution network that provides unmatched speed-to-market. TE is more than just a supplier. We are your partner for navigating the road ahead. Explore TE’s innovative automotive solutions, or connect with us today to discuss how to solve your specific design challenges.

  • Persona AI Brings Calm Experience to the Hectic Humanoid Industry
    by Evan Ackerman on 30. Juna 2024. at 13:00

    It may at times seem like there are as many humanoid robotics companies out there as the industry could possibly sustain, but the potential for useful and reliable and affordable humanoids is so huge that there’s plenty of room for any company that can actually get them to work. Joining the dozen or so companies already on this quest is Persona AI, founded last month by Nic Radford and Jerry Pratt, two people who know better than just about anyone what it takes to make a successful robotics company, although they also know enough to be wary of getting into commercial humanoids. Persona AI may not be the first humanoid robotics startup, but its founders have some serious experience in the space: Nic Radford led the team that developed NASA’s Valkyrie humanoid robot, before founding Houston Mechatronics (now Nauticus Robotics), which introduced a transforming underwater robot in 2019. He also founded Jacobi Motors, which is commercializing variable flux electric motors. Jerry Pratt worked on walking robots for 20 years at the Institute for Human and Machine Cognition (IHMC) in Pensacola, Florida. He co-founded Boardwalk Robotics in 2017, and has spent the last two years as CTO of multi-billion-dollar humanoid startup Figure. “It took me a long time to warm up to this idea,” Nic Radford tells us. “After I left Nauticus in January, I didn’t want anything to do with humanoids, especially underwater humanoids, and I didn’t even want to hear the word ‘robot.’ But things are changing so quickly, and I got excited and called Jerry and I’m like, this is actually very possible.” Jerry Pratt, who recently left Figure due primarily to the two-body problem, seems to be coming from a similar place: “There’s a lot of bashing your head against the wall in robotics, and persistence is so important. Nic and I have both gone through pessimism phases with our robots over the years. 
We’re a bit more optimistic about the commercial aspects now, but we want to be pragmatic and realistic about things too.” Behind all of the recent humanoid hype lies the very, very difficult problem of making a highly technical piece of hardware and software compete effectively with humans in the labor market. But that’s also a very, very big opportunity—big enough that Persona doesn’t have to be the first company in this space, or the best funded, or the highest profile. They simply have to succeed, but of course sustainable commercial success with any robot (and bipedal robots in particular) is anything but simple. Step one will be building a founding team across two locations: Houston and Pensacola, Fla. But Radford says that the response so far to just a couple of LinkedIn posts about Persona has been “tremendous.” And with a substantial seed investment in the works, Persona will have more than just a vision to attract top talent. For more details about Persona, we spoke with Persona AI co-founders Nic Radford and Jerry Pratt. Why start this company, why now, and why you? Nic Radford: The idea for this started a long time ago. Jerry and I have been working together off and on for quite a while, being in this field and sharing a love for what the humanoid potential is while at the same time being frustrated by where humanoids are at. As far back as probably 2008, we were thinking about starting a humanoids company, but for one reason or another the viability just wasn’t there. We were both recently searching for our next venture and we couldn’t imagine sitting this out completely, so we’re finally going to explore it, although we know better than anyone that robots are really hard. They’re not that hard to build; but they’re hard to make useful and make money with, and the challenge for us is whether we can build a viable business with Persona: can we build a business that uses robots and makes money? That’s our singular focus. 
We’re pretty sure that this is likely the best time in history to execute on that potential. Jerry Pratt: I’ve been interested in commercializing humanoids for quite a while—thinking about it, and giving it a go here and there, but until recently it has always been the wrong time from both a commercial point of view and a technological readiness point of view. You can think back to the DARPA Robotics Challenge days when we had to wait about 20 seconds to get a good lidar scan and process it, which made it really challenging to do things autonomously. But we’ve gotten much, much better at perception, and now, we can get a whole perception pipeline to run at the framerate of our sensors. That’s probably the main enabling technology that’s happened over the last 10 years. From the commercial point of view, now that we’re showing that this stuff’s feasible, there’s been a lot more pull from the industry side. It’s like we’re at the next stage of the Industrial Revolution, where the harder problems that weren’t roboticized from the 60s until now can now be. And so, there’s really good opportunities in a lot of different use cases. A bunch of companies have started within the last few years, and several were even earlier than that. Are you concerned that you’re too late? Radford: The concern is that we’re still too early! There might only be one Figure out there that raises a billion dollars, but I don’t think that’s going to be the case. There’s going to be multiple winners here, and if the market is as large as people claim it is, you could see quite a diversification of classes of commercial humanoid robots. Pratt: We definitely have some catching up to do but we should be able to do that pretty quickly, and I’d say most people really aren’t that far from the starting line at this point. There’s still a lot to do, but all the technology is here now—we know what it takes to put together a really good team and to build robots. 
We’re also going to do what we can to increase speed, like by starting with a surrogate robot from someone else to get the autonomy team going while building our own robot in parallel. Radford: I also believe that our capital structure is a big deal. We’re taking an anti-stealth approach, and we want to bring everyone along with us as our company grows and give out a significant chunk of the company to early joiners. It was an anxiety of ours that we would be perceived as a me-too and that nobody was going to care, but it’s been the exact opposite with a compelling response from both investors and early potential team members. So your approach here is not to look at all of these other humanoid robotics companies and try and do something they’re not, but instead to pursue similar goals in a similar way in a market where there’s room for all? Pratt: All robotics companies, and AI companies in general, are standing on the shoulders of giants. These are the thousands of robotics and AI researchers that have been collectively bashing their heads against the myriad problems for decades—some of the first humanoids were walking at Waseda University in the late 1960s. While there are some secret sauces that we might bring to the table, it is really the combined efforts of the research community that now enables commercialization. So if you’re at a point where you need something new to be invented in order to get to applications, then you’re in trouble, because with invention you never know how long it’s going to take. What is available today and now, the technology that’s been developed by various communities over the last 50+ years—we all have what we need for the first three applications that are widely mentioned: warehousing, manufacturing, and logistics. The big question is, what’s the fourth application? And the fifth and the sixth? And if you can start detecting those and planning for them, you can get a leg up on everybody else. 
The difficulty is in the execution and integration. It’s a ten thousand—no, that’s probably too small—it’s a hundred thousand piece puzzle where you gotta get each piece right, and occasionally you lose some pieces on the floor that you just can’t find. So you need a broad team that has expertise in like 30 different disciplines to try to solve the challenge of an end-to-end labor solution with humanoid robots. Radford: The idea is like one percent of starting a company. The rest of it, and why companies fail, is in the execution. Things like, not understanding the market and the product-market fit, or not understanding how to run the company, the dimensions of the actual business. I believe we’re different because with our backgrounds and our experience we bring a very strong view on execution, and that is our focus on day one. There’s enough interest in the VC community that we can fund this company with a singular focus on commercializing humanoids for a couple different verticals. But listen, we got some novel ideas in actuation and other tricks up our sleeve that might be very compelling for this, but we don’t want to emphasize that aspect. I don’t think Persona’s ultimate success comes just from the tech component. I think it comes mostly from ‘do we understand the customer, the market needs, the business model, and can we avoid the mistakes of the past?’ How is that going to change things about the way that you run Persona? Radford: I started a company [Houston Mechatronics] with a bunch of research engineers. They don’t make the best product managers. More broadly, if you’re staffing all your disciplines with roboticists and engineers, you’ll learn that it may not be the most efficient way to bring something to market. Yes, we need those skills. They are essential. But there’s so many other aspects of a business that get overlooked when you’re fundamentally a research lab trying to commercialize a robot. 
I’ve been there, I’ve done that, and I’m not interested in making that mistake again. Pratt: It’s important to get a really good product team that’s working with a customer from day one to have customer needs drive all the engineering. The other approach is ‘build it and they will come’ but then maybe you don’t build the right thing. Of course, we want to build multi-purpose robots, and we’re steering clear of saying ‘general purpose’ at this point. We don’t want to overfit to any one application, but if we can get to a dozen use cases, two or three per customer site, then we’ve got something. There still seems to be a couple of unsolved technical challenges with humanoids, including hands, batteries, and safety. How will Persona tackle those things? Pratt: Hands are such a hard thing—getting a hand that has the required degrees of freedom and is robust enough that if you accidentally hit it against your table, you’re not just going to break all your fingers. But we’ve seen robotic hand companies popping up now that are showing videos of hitting their hands with a hammer, so I’m hopeful. Getting one to two hours of battery life is relatively achievable. Pushing up towards five hours is super hard. But batteries can now be charged in 20 minutes or so, as long as you’re going from 20 percent to 80 percent. So we’re going to need a cadence where robots are swapping in and out and charging as they go. And batteries will keep getting better. Radford: We do have a focus on safety. It was paramount at NASA, and when we were working on Robonaut, it led to a lot of morphological considerations with padding. In fact, the first concepts and images we have of our robot illustrate extensive padding, but we have to do that carefully, because at the end of the day it’s mass and it’s inertia. What does the near future look like for you? Pratt: Building the team is really important—getting those first 10 to 20 people over the next few months. 
Then we’ll want to get some hardware and get going really quickly, maybe buying a couple of robot arms or something to get our behavior and learning pipelines going while in parallel starting our own robot design. From our experience, after getting a good team together and starting from a clean sheet, a new robot takes about a year to design and build. And then during that period we’ll be securing a customer or two or three. Radford: We’re also working hard on some very high profile partnerships that could influence our early thinking dramatically. Like Jerry said earlier, it’s a massive 100,000 piece puzzle, and we’re working on the fundamentals: the people, the cash, and the customers.
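Pratt’s battery arithmetic a few paragraphs back lends itself to a quick back-of-the-envelope calculation. The runtime and charge-time figures come from the interview; the 20-to-80 percent swap band, the zero swap overhead, and the function names are simplifying assumptions for illustration.

```python
# Sketch of the battery-swap cadence Pratt describes: run down the usable
# band of the battery, then charge for ~20 minutes, repeat.
import math

def duty_cycle(full_runtime_min, charge_min, usable_band=0.6):
    """Fraction of wall-clock time a robot can work if it uses only the
    20-80% band (usable_band) of a battery whose full-charge runtime is
    full_runtime_min, then charges for charge_min minutes."""
    work = full_runtime_min * usable_band
    return work / (work + charge_min)

def robots_needed(posts, duty):
    """Robots required to keep `posts` work stations continuously staffed,
    ignoring swap/travel overhead."""
    return math.ceil(posts / duty)

d = duty_cycle(120, 20)   # 2-hour full runtime, 20-minute 20->80% charge
n = robots_needed(10, d)  # robots to keep 10 posts staffed at that duty cycle
```

Under these assumptions a robot works about 78 percent of the time, so staffing 10 posts continuously takes 13 robots—which is why the interview frames charging as a fleet-cadence problem rather than a single-robot one.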

  • This Wearable Computer Made a Fashion Statement
    by Allison Marsh on 29. Juna 2024. at 13:00

    In 1993, well before Google Glass debuted, the artist Lisa Krohn designed a prototype wearable computer that looked like no other. The Cyberdesk was an experiment in augmented reality. At a time when computers were mostly beige and boxy, Krohn envisioned a pliable, high-tech garment that fused fashion with function. Krohn studied art and architectural history at Brown University and the Rhode Island School of Design (RISD) before completing an MFA at Cranbrook Academy of Art in Bloomfield Hills, Mich., in 1988. With the Cyberdesk, she tapped into a cultural moment in which artists, techies, writers, and others were celebrating the convergence of humans and machines and eagerly anticipating our cyborg future. What is Lisa Krohn’s Cyberdesk? Although a working prototype of the Cyberdesk was never built, the yellow eyepiece suggested a retinal display. Lisa Krohn and Christopher Myers The Cyberdesk, made of resin, plastic, metal, and glass, was meant to be worn like a necklace. The four circles along the breastbone are a four-key keyboard with a large trackball at the top center; the user would use the keyboard and trackball to make selections from menus of options. A small microphone lies against the throat, and an earpiece hooks into the left ear. Krohn imagined the yellow tube in front of the right eye as a retinal scan display that would project a laser beam directly onto the back of the eye, creating a screen centered in the user’s field of vision. In the back, there is a port suggestive of some type of neural link. The Cyberdesk was intended to run on energy harvested from the body’s movement and the sun. A port on the back of the Cyberdesk was intended as a neural link. Lisa Krohn and Christopher Myers Krohn, along with Chris Myers, a student at the Art Center College of Design, made two models of the Cyberdesk, but it was never turned into a working prototype. 
The underlying technology wasn’t there yet, although there were engineers who were experimenting with similar ideas. For example, Krohn knew about work on virtual retinal displays at the University of Washington’s Human Interface Technology Laboratory, but she didn’t pursue a collaboration. And so Krohn’s design existed as “strategic foresight, speculative technology, predictive design, or design fiction,” she told me in a recent email. Krohn imagined a possible future, one in which, as she notes on her company’s website, “person and machine merge into one seamless collaborative super-being!” In other words, a cyborg. The Cyberdesk wasn’t the only piece of cyborg gear that Krohn designed. In 1988, before the age of smartphones and Web searches, she imagined a wrist computer that combined satellite navigation, a phone, a wristwatch, and a regional information guide. Made of a flexible plastic, it could be folded up and worn as a decorative cuff when not being used as a computer. Lisa Krohn also designed a flexible wrist computer that could be folded up when not in use. Lisa Krohn Krohn designed the wrist computer prototype before “wearable” became a common way to refer to a portable device that incorporates computer technology. Futurist Paul Saffo is credited with first using the term “wearable computer” in an article in InfoWorld in 1991. Saffo predicted the first wearables would be worn on the belts of maintenance workers and then be extended to deskless, information-intensive tasks, such as conducting store inventories. He also suggested a game console consisting of a tiny display integrated into sunglasses and paired with a power glove. Nowhere did he consider technology as a fashion accessory, and I suspect he wasn’t even considering women when he made his predictions. Meanwhile, Steve Mann was working on ideas for mediated vision as a graduate student at MIT. 
Mann was first inspired to build a better welding mask that would protect the welder’s eyes from the bright electric arc while still allowing a clear view. This led him to think about how to use video cameras, displays, and computers to modify vision in real time. Both Krohn and Mann ran into similar real-world challenges: cellphones, the Internet, civilian GPS, and online databases were still in their infancy, and the hardware was heavy and clunky. While Mann built boxy functional prototypes that he demoed on himself, Krohn imagined more speculative technology. Each “page” of Krohn’s phonebook represents a separate function—dial phone, answering machine, and printer. Lisa Krohn, Sigmar Willnauer, and Tony Guido Krohn also worked on utilitarian business technologies. In 1987, she designed a prototype for the phonebook, an integrated phone with answering machine and printer. Each “page” of the phonebook had its own function, and an electric switch automatically changed to that function as the page was flipped, with instructions printed on the page. That intuitive design was in sharp contrast to most answering machines of the time, which were clunky and not particularly easy to use. The phonebook was an example of “product semantics,” which holds that a product’s design should help the user understand the product’s function and meaning. At Cranbrook, Krohn studied under Michael and Katherine McCoy, who embraced that theory of design. 
Krohn and Michael McCoy wrote about that aspect of the phonebook in their 1989 essay “Beyond Beige: Interpretive Design for the Post-Industrial Age”: “The casting of [a] personal electronic device into the mold of [a] personal agenda is an attempt to make a product reach out to its users by informing them about how it operates, where it resides, and how it fits into their lives.” Lisa Krohn championed cyberfeminism and cyborgs Lisa Krohn designed the Cyberdesk in 1993, at a time when wearable computers existed mainly in science fiction. Dietmar Quistorf The Cyberdesk as well as the wrist computer were early examples of designs influenced by cyberfeminism. This feminist movement emerged in the early 1990s as a counter to the dominance of men in computing, gaming, and various Internet spaces. It built on feminist science fiction, such as the writings of Octavia Butler, Vonda McIntyre, and Joanna Russ, as well as the work of hackers, coders, and media artists. Different threads of cyberfeminism developed around the world, especially in Australia, Germany, and the United States. While mainstream depictions of cyborgs continued to tilt masculine, cyberfeminists challenged the patriarchy by experimenting with genderless ideas of cyborgs and recombinants that melded machines, plants, humans, and animals. The feminist theorist and historian of technology Donna Haraway kindled this cyborgian drift through her 1985 essay, “A Manifesto for Cyborgs,” published in the Socialist Review. She argued that as the end of the 20th century approached, we were all becoming cyborgs due to the breakdown of lines dividing humans and machines. Her cyborg theory hinged on communication, and she saw cyborgs as a potential solution that allowed for a fluidity of both language and identity. The essay is considered one of the foundational texts in cyberfeminism, and it was republished in Haraway’s 1990 book, Simians, Cyborgs, and Women: The Reinvention of Nature. 
Krohn and McCoy’s 1989 essay also highlighted communication as a central problem in modern design. Mainstream consumer electronics, they argued, had reached a monotonous uniformity of design that favored manufacturing efficiency over conveying the product’s intended function. Both Haraway and Krohn saw opportunities for technology, especially microelectronics, to challenge the restrictions of the past. By embracing the cyborg, both women found new ways to overcome the limits of language and communication and to forge new directions in feminism. Cyberdesk 2.0 I had the privilege of meeting Lisa Krohn when she participated in a roundtable on the Cyberdesk at the 2023 annual meeting of the Society for the History of Technology. The assembled group, which included curators and conservators from the Cooper Hewitt, Smithsonian Design Museum and the San Francisco Museum of Modern Art (each of which has a Cyberdesk prototype in its collection), considered a possible Cyberdesk version 2.0. What would be different if Krohn were designing it today? In 2023, Krohn reimagined the Cyberdesk. It now incorporates technology that hadn’t been available 30 years earlier, such as sensors to monitor brainwaves, hydration, and stress levels. Duvit Mark Kakunegoda The group focused their discussion around the idea of “design futuring,” a concept promoted by Tony Fry in his 2009 book of the same name. Design futuring is a way to actively shape the future, rather than passively trying to predict it and then reacting after the fact. Fry describes how design futuring could be used to promote sustainability. In the case of the Cyberdesk 2.0, a focus on sustainability might lead to a different choice of materials. The original resin provided a malleable material that could mold to the contours of the body. But its long-term stability is terrible. 
Despite best practices in conservation, the Cyberdesk will likely turn into a goopy mess in the not-too-distant future. (In a previous column, I wrote about a transistorized music box owned by John Bardeen that suffers from the same basic problem of decaying materials, which in curatorial circles is known as “inherent vice.”) The panelists considered alternatives like biomaterials, and they discussed the entire product life cycle, the challenges of electronic waste, and the mining of rare earth elements. They wondered how the design process and the global supply chain might change if such factors were considered from the start, rather than as problems to be solved later. These are just a few of the ideas that percolated while historians, artists, curators, and conservators considered the Cyberdesk. Now imagine if a few engineers were also present. To me, that would have been a really worthwhile discussion. Not only can art unlock creative design and push innovations in new directions, it also allows us to reflect on technology in daily life. And artists can learn from engineers about new materials, technologies, and possibilities. Working together, technology and design no longer need the modifiers speculative and predictive. Engineers and artists can create the future reality. Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the July 2024 print issue as “The Wearable Computer as Bling.” References: I first learned about Lisa Krohn’s Cyberdesk and design theory at the Society for the History of Technology’s conference in Los Angeles in 2023, during the session “Revisiting Lisa Krohn’s Cyberdesk (1993), a cyberfeminist concept model.” Both the Cooper Hewitt, Smithsonian Design Museum and the San Francisco Museum of Modern Art have featured their respective Cyberdesks in exhibits and online articles. 
Note that the difference in the colors—SFMOMA’s is white, while Cooper Hewitt’s is brown—is due to the instability of the plastics and resin, as well as variations in the materials. As I considered Krohn’s cyborg designs, I couldn’t help but recall Donna Haraway’s classic essay “A Cyborg Manifesto,” a foundational text in cyberfeminism. Forty years on, we are more cyborgian than Haraway originally posited. Her challenges to traditional notions of identity still resonate with today’s nuanced discussions of gender. Addressing algorithmic bias and generative AI training may be a new frontier for cyberfeminism.

  • Get to Know the IEEE Board of Directors
    by IEEE on 28. Juna 2024. at 18:00

    The IEEE Board of Directors shapes the future direction of IEEE and is committed to ensuring IEEE remains a strong and vibrant organization—serving the needs of its members and the engineering and technology community worldwide—while fulfilling the IEEE mission of advancing technology for the benefit of humanity. This article features IEEE Board of Directors members Deepak Mathur, Saifur Rahman, and Aylin Yener. IEEE Senior Member Deepak Mathur Vice President, Member and Geographic Activities Mathur has nearly 40 years of professional experience in electronics and telecommunications at India’s premier public sector oil and gas company, engaged in the exploration and exploitation of hydrocarbons. During his tenure, most recently as chief general manager, he successfully led multidisciplinary teams through significant IT and communications projects. These include supervisory control and data acquisition, online and real-time monitoring systems, WiMax-based broadband wireless access systems, and GPS/GSM-based vehicle tracking systems. Mathur also has experience managing and working on high-tech oil well logging systems, which analyze the properties of the subsurface to explore the possibility of hydrocarbons. Mathur has served in many IEEE leadership roles at the region, section, council, and global levels. A member of the IEEE Industry Applications Society, the IEEE Signal Processing Society, and the IEEE Society on Social Implications of Technology, he was the director of IEEE Region 10 (Asia and Pacific), a member of the Board of Governors of the IEEE Society on Social Implications of Technology (2013–2015), and chair of the IEEE India Council (2015–2016). In his current role with IEEE Member and Geographic Activities, Mathur focuses on supporting IEEE members, as well as developing IEEE membership recruitment and retention strategies. Mathur is a member of IEEE-Eta Kappa Nu, the honor society. 
Throughout his IEEE journey, he has received several prestigious recognitions, including the Region 10 Outstanding Volunteer Award, the MGA Achievement Award, and the India Council Lifetime Achievement Award. Mathur is currently a professor of practice and a member of the academic council at Marwadi University, in Rajkot, India. IEEE Life Fellow Saifur Rahman 2023 IEEE President Rahman is the founding director of the Advanced Research Institute and the Center for Energy and the Global Environment at Virginia Tech, where he researches renewable energy, sensor integration, smart grids, and smart cities. His work promotes clean-tech solutions for climate sustainability, and his six-point solution to reduce carbon dioxide emissions in the electric power sector is being implemented in varying degrees in more than 100 countries. A prolific lecturer, Rahman has made more than 850 presentations at conferences and invited speaking engagements in more than 30 countries. His visionary and innovative leadership approaches and strategies have earned him global recognition. In 2020, he spoke at five different webinars in five countries on four continents in one day. As the 2023 IEEE president, his main priorities were to position the organization as a force for change and to make it more relevant to technology professionals worldwide. Rahman feels that IEEE, as the world’s largest organization of technical professionals, has both the opportunity and the responsibility to address the causes of, mitigate the impact of, and adapt to climate change. His forward-thinking strategies led to the creation of the IEEE Climate Change website and helped foster collaboration among technology and engineering professionals, policymakers, and other organizations to encourage a dialogue on sustainable energy policies and practices. 
Previously, Rahman served as the vice president of IEEE Publication Services and Products (2006) and president of the IEEE Power & Energy Society (2018 and 2019). Rahman has published more than 160 journal papers with over 20,000 citations. He is the founding editor in chief of the IEEE Electrification Magazine and IEEE Transactions on Sustainable Energy. He has also received several IEEE recognitions, including the Power & Energy Society Service Award, PES Outstanding Power Engineering Educator Award, Technical Activities Board Hall of Honor, and IEEE Millennium Medal. IEEE Fellow Aylin Yener Director, Division IX Yener, an endowed chair professor at The Ohio State University College of Engineering, aims to connect the universe and everyone and everything in it by designing systems that ensure secure and reliable information transfer in a sustainable manner. Her work in communications, information theory, and artificial intelligence covers a wide range of system design topics, from network optimization to security and privacy of information to robust and safe machine-learning algorithms in networked settings. Of particular interest to Yener is next-generation wireless communication and how to create an energy-neutral digital society. She also works to ensure digital connectivity for underserved populations and to create fair and private AI algorithms to aid human ingenuity. Yener has been an active IEEE volunteer for more than two decades, with experience in membership, finances, publications, conferences, and outreach. She has served as president of the IEEE Information Theory Society (2020) and is an active member of the IEEE Signal Processing, IEEE Communications, and IEEE Vehicular Technology societies. As director of Division IX, she advocates for deeper cooperation among societies by sharing best practices and facilitating the cross-pollination of ideas. 
Yener has been an IEEE distinguished lecturer and is currently the editor in chief of IEEE Transactions on Green Communications and Networking. She has delivered more than 60 technical keynotes and invited lectures in the past 10 years. Yener is committed to a broader educational impact, having cofounded the IEEE North American School of Information Theory, which offers graduate students and postdoctoral researchers the opportunity to learn from leading experts. Yener’s IEEE recognitions include the Marconi Prize Paper Award, Communication Theory Technical Achievement Award, and Women in Communications Engineering Outstanding Achievement Award. She is a fellow of the American Association for the Advancement of Science and a member of the Science Academy of Turkey.

  • Why Not Give Robots Foot-Eyes?
    by Evan Ackerman on 28. Juna 2024. at 16:00

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore. One of the (many) great things about robots is that they don’t have to be constrained by how their biological counterparts do things. If you have a particular problem your robot needs to solve, you can get creative with extra sensors: many quadrupeds have side cameras and butt cameras for obstacle avoidance, and humanoids sometimes have chest cameras and knee cameras to help with navigation along with wrist cameras for manipulation. But how far can you take this? I have no idea, but it seems like we haven’t gotten to the end of things yet because now there’s a quadruped with cameras on the bottom of its feet. Sensorized feet are not a new idea; it’s pretty common for quadrupedal robots to have some kind of foot-mounted force sensor to detect ground contact. Putting an actual camera down there is fairly novel, though, because it’s not at all obvious how you’d go about doing it. And the way that roboticists from the Southern University of Science and Technology in Shenzhen went about doing it is, indeed, not at all obvious. Go1’s snazzy feetsies have soles made of transparent acrylic, with a slightly flexible plastic structure supporting a 60-millimeter gap up to each camera (640 x 480 pixels at 120 frames per second) with a quartet of LEDs to provide illumination. While it looks complicated, at 120 grams it doesn’t weigh all that much, and it costs only about $50 per foot ($42 of which is the camera). The whole thing is sealed to keep out dirt and water. So why bother with all of this (presumably somewhat fragile) complexity? As we ask quadruped robots to do more useful things in more challenging environments, having more information about what exactly they’re stepping on and how their feet are interacting with the ground is going to be super helpful.
Robots that rely only on proprioceptive sensing (sensing self-movement) are great and all, but when you start trying to move over complex surfaces like sand, it can be really helpful to have vision that explicitly shows how your robot is interacting with the surface that it’s stepping on. Preliminary results showed that Foot Vision enabled the Go1 using it to perceive the flow of sand or soil around its foot as it takes a step, which can be used to estimate slippage, the bane of ground-contacting robots. The researchers acknowledge that their hardware could use a bit of robustifying, and they also want to try adding some tread patterns around the circumference of the foot, since that plexiglass window is pretty slippery. The overall idea is to make Foot Vision as useful as the much more common gripper-integrated vision systems for robotic manipulation, helping legged robots make better decisions about how to get where they need to go. Foot Vision: A Vision-Based Multi-Functional Sensorized Foot for Quadruped Robots, by Guowei Shi, Chen Yao, Xin Liu, Yuntian Zhao, Zheng Zhu, and Zhenzhong Jia from Southern University of Science and Technology in Shenzhen, is accepted to the July 2024 issue of IEEE Robotics and Automation Letters.
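The paper's actual perception pipeline isn't described here, but the core idea of estimating slippage from sole-camera footage can be sketched: measure how the texture under the foot shifts between consecutive frames. The block-matching approach below is an illustrative assumption, not the authors' implementation, and real sole imagery would be far noisier than the synthetic frames used here.

```python
import numpy as np

def estimate_shift(prev, curr, max_disp=5):
    """Estimate the dominant in-plane motion between two grayscale frames
    by exhaustive block matching: slide the current frame over the previous
    one and keep the displacement with the lowest sum of squared differences."""
    h, w = prev.shape
    m = max_disp
    core = prev[m:h - m, m:w - m]          # central patch of the previous frame
    best, best_ssd = (0, 0), np.inf
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            cand = curr[m + dy:h - m + dy, m + dx:w - m + dx]
            ssd = np.sum((core - cand) ** 2)
            if ssd < best_ssd:
                best_ssd, best = ssd, (dy, dx)
    return best

# Synthetic test: a random texture shifted by a known amount.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(2, -3), axis=(0, 1))
print(estimate_shift(frame, shifted))  # (2, -3)
```

A production system would more likely use dense optical flow (so that sand flowing around the foot can be seen as a spatially varying field) rather than a single global shift, but the principle is the same: apparent texture motion under a planted foot is evidence of slip.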

  • Video Friday: Humanoids Get a Job
    by Evan Ackerman on 28. Juna 2024. at 15:03

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS IROS 2024: 14–18 October 2024, ABU DHABI, UAE ICSR 2024: 23–26 October 2024, ODENSE, DENMARK Cybathlon 2024: 25–27 October 2024, ZURICH Enjoy today’s videos! Agility has been working with GXO for a bit now, but the big news here (and it IS big news) is that Agility’s Digit robots at GXO now represent the first formal commercial deployment of humanoid robots. [ GXO ] GXO can’t seem to get enough humanoids, because they’re also starting some R&D with Apptronik. [ GXO ] In this paper, we introduce a full-stack system for humanoids to learn motion and autonomous skills from human data. Through shadowing, human operators can teleoperate humanoids to collect whole-body data for learning different tasks in the real world. Using the data collected, we then perform supervised behavior cloning to train skill policies using egocentric vision, allowing humanoids to complete different tasks autonomously by imitating human skills. THAT FACE. [ HumanPlus ] Yeah these robots are impressive but it’s the sound effects that make it. [ Deep Robotics ] Meet CARMEN, short for Cognitively Assistive Robot for Motivation and Neurorehabilitation–a small, tabletop robot designed to help people with mild cognitive impairment (MCI) learn skills to improve memory, attention, and executive functioning at home. [ CARMEN ] via [ UCSD ] Thanks, Ioana! The caption of this video is, “it did not work...” You had one job, e-stop person! ONE JOB! [ WVUIRL ] This is a demo of cutting wood with a saw. When using position control for this task, precise measurement of the cutting amount is necessary. 
However, by using impedance control, this requirement is eliminated, allowing for successful cutting with only rough commands. [ Tokyo Robotics ] This is mesmerizing. [ Oregon State ] Quadrupeds are really starting to look like the new hotness in bipedal locomotion. [ University of Leeds ] I still think this is a great way of charging a robot. Make sure and watch until the end to see the detach trick. [ YouTube ] The Oasa R1, now on Kickstarter for $1,200, is the world’s first robotic lawn mower that uses one of them old timey reely things for cutting. [ Kickstarter ] ICRA next year is in Atlanta! [ ICRA 2025 ] Our Skunk Works team developed a modified version of the SR-71 Blackbird, titled the M-21, which carried an uncrewed reconnaissance drone called the D-21. The D-21 was designed to capture intelligence, release its camera, then self-destruct! [ Lockheed Martin ] The RPD 35 is a robotic powerhouse that surveys, distributes, and drives wide-flange solar piles up to 19 feet in length. [ Built Robotics ] Field AI’s brain technology is enabling robots to autonomously explore oil and gas facilities, navigating throughout the site and inspecting equipment for anomalies and hazardous conditions. [ Field AI ] Husky Observer was recently deployed at a busy automotive rail yard to carry out various autonomous inspection tasks including measuring train car positions and RFID data collection from the offloaded train inventory. [ Clearpath ] If you’re going to try to land a robot on the Moon, it’s useful to have a little bit of the Moon somewhere to practice on. [ Astrobotic ] Would you swallow a micro-robot? In a gutsy demo, physician Vivek Kumbhari navigates Pillbot, a wireless, disposable robot swallowed onstage by engineer Alex Luebke, modeling how this technology can swiftly provide direct visualization of internal organs. 
Learn more about how micro-robots could move us past the age of invasive endoscopies and open up doors to more comfortable, affordable medical imaging. [ TED ] How will AI improve our lives in the years to come? From its inception six decades ago to its recent exponential growth, futurist Ray Kurzweil highlights AI’s transformative impact on various fields and explains his prediction for the singularity: the point at which human intelligence merges with machine intelligence. [ TED ]

  • Andrew Ng: Unbiggen AI
    by Eliza Strickland on 9. Februara 2022. at 15:31

Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A. Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias. The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way? Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it.
Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions. When you say you want a foundation model for computer vision, what do you mean by that? Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them. What needs to happen for someone to build a foundation model for video? Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision. Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries. It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users. Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step.
One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation. I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince. I expect they’re both convinced now. Ng: I think so, yes. Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.” How do you define data-centric AI, and why do you consider it a movement? Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem.
So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data. When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline. The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up. You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them? Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn. When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set? Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. 
What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system. For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance. Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training? Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle. One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data.
Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way. When you talk about engineering the data, what do you mean exactly? Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity. For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow. What about using synthetic data? Is that often a good solution? Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.
Do you mean that synthetic data would allow you to try the model on more data sets? Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category. Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data. To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment? Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data. One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use.
Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory. How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up? Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations. In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists? So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work. Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. 
The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains. Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement? Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it. This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
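The label-consistency tooling Ng describes can be made concrete with a toy sketch: group examples that share the same underlying content (for instance, a perceptual hash of the image) and surface any group whose annotators disagree. The content keys, labels, and function below are hypothetical illustrations, not LandingLens code.

```python
from collections import defaultdict

def flag_inconsistent_labels(examples):
    """Given (content_key, label) pairs, return the content keys whose
    annotators assigned more than one distinct label, with those labels.
    content_key stands in for something like an image hash or example ID."""
    groups = defaultdict(set)
    for content_key, label in examples:
        groups[content_key].add(label)
    return {k: sorted(v) for k, v in groups.items() if len(v) > 1}

# Hypothetical defect-inspection annotations.
dataset = [
    ("img_001", "scratch"),
    ("img_001", "scratch"),
    ("img_002", "dent"),
    ("img_002", "scratch"),   # same image, conflicting label
    ("img_003", "ok"),
]
print(flag_inconsistent_labels(dataset))  # {'img_002': ['dent', 'scratch']}
```

A reviewer can then relabel just the flagged examples, which is exactly the targeted, data-centric workflow Ng contrasts with collecting more data for everything.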

  • How AI Will Change Chip Design
    by Rina Diane Caballar on 8. Februara 2022. at 14:00

The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process. Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version. But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform. How is AI currently being used to design the next generation of chips? Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider. Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases.
We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI. What are the benefits of using AI for chip design? Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design. So it’s like having a digital twin in a sense? Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end. So, it’s going to be more efficient and, as you said, cheaper? Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering. We’ve talked about the benefits. How about the drawbacks? Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps.
But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years. Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together. One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge. How can engineers use AI to better prepare and extract insights from hardware or sensor data? Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start. One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI. What should engineers and designers consider when using AI for chip design? 
Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team. How do you think AI will affect chip designers’ jobs? Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip. How do you envision the future of AI and chip design? Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
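Gorr's surrogate-model idea can be sketched in a few lines: run the expensive physics model a handful of times, fit a cheap model to those runs, and do the dense parameter sweep on the cheap model instead. The "physics model" below is a stand-in function and the polynomial fit is just one simple surrogate choice; real design flows would use richer models and validate the surrogate against held-out simulations.

```python
import numpy as np

def expensive_physics_model(x):
    """Stand-in for a costly physics-based simulation (hypothetical)."""
    return np.sin(2 * x) + 0.5 * x ** 2

# Run the "expensive" model only a handful of times to get training data...
train_x = np.linspace(0.0, 2.0, 15)
train_y = expensive_physics_model(train_x)

# ...fit a cheap polynomial surrogate to those runs...
surrogate = np.poly1d(np.polyfit(train_x, train_y, deg=7))

# ...then do the dense parameter sweep on the surrogate instead.
sweep = np.linspace(0.0, 2.0, 1000)
best = sweep[np.argmin(surrogate(sweep))]
print(f"surrogate minimum on [0, 2] near x = {best:.2f}")
```

The sweep evaluates the surrogate a thousand times for roughly the cost of fifteen "real" simulations, which is the efficiency Gorr describes; the trade-off, as she notes, is that the surrogate is less accurate than the physics-based model it approximates.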

  • Atomically Thin Materials Significantly Shrink Qubits
    by Dexter Johnson on 7 February 2022 at 16:12

    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality. IBM has adopted a superconducting-qubit road map that calls for a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor are feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to find a better path toward scalability.

Now researchers at MIT have managed both to reduce the size of the qubits and to do so in a way that reduces the interference between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can fit on a device by a factor of 100. “We are addressing both qubit miniaturization and quality,” said William Oliver, the director of the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient; they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit. Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material sits between two metal plates.
The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

[Photo: Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Credit: Nathan Fiske/MIT]

In that environment, the insulating materials available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that make them too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another. As a result, the intrinsic silicon substrate below the plates, and to a smaller degree the vacuum above them, serves as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The trade-off is that the lateral size of each plate in this open-face design ends up being quite large—typically 100 by 100 micrometers—in order to achieve the required capacitance.

In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates. “We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said co-lead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory of Electronics. On either side of the hBN, the MIT researchers used the 2D superconducting material niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes within seconds of exposure to air, according to Wang.
This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas. While this would seemingly complicate scaling up production of these capacitors, Wang doesn’t regard it as a limiting factor. “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.” This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so oxidation of the outer surface of the niobium diselenide no longer plays a significant role. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between neighboring qubits. “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang. Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
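For a back-of-the-envelope sense of the shrinkage, a simple parallel-plate estimate shows how small an hBN sandwich capacitor can be. The numbers below are assumptions for illustration, not figures from the MIT work: a target capacitance of about 70 femtofarads (a typical shunt value for transmon-style qubits), an hBN relative permittivity of about 3.5, and a 5-nanometer dielectric:

```python
# Parallel-plate capacitor: C = eps0 * eps_r * A / d
eps0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 3.5           # assumed relative permittivity of hBN
d = 5e-9              # assumed dielectric thickness: a few nm of stacked hBN
C = 70e-15            # assumed target capacitance, ~typical transmon shunt

area = C * d / (eps0 * eps_r)   # required plate area, m^2
side = area ** 0.5              # side of a square plate, m
print(f"plate side ~ {side * 1e6:.1f} um")   # a few micrometers
```

Even under these rough assumptions, the plate shrinks from roughly 100 micrometers to a few micrometers on a side—an area reduction of the same order as the factor-of-100 density gain the article describes.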
