IEEE Spectrum
- The Toyota Prius Transformed the Auto Industry, by Willie D. Jones, 17 January 2025 at 19:00
In the early 1990s, Toyota saw that environmental awareness and tighter emissions regulations would shape the future of the automotive industry. The company aimed to create an eco-friendly, efficient vehicle that would meet future standards. In 1997 Toyota introduced the Prius to the Japanese market. The car was the world’s first mass-produced hybrid vehicle that combined gasoline and electric power to reduce fuel consumption and emissions. Its worldwide debut came in 2000.

Developing the Prius posed significant technical and market challenges that included designing an efficient hybrid power train, managing battery technology, and overcoming consumer skepticism about combining an electric drivetrain system with the standard gasoline-fueled power train. Toyota persevered, however, and its instincts proved prescient and transformative.

“The Prius is not only the world’s first mass-produced hybrid car, but its technical and commercial success also spurred other automakers to accelerate hybrid vehicle development,” says IEEE Member Nobuo Kawaguchi, a professor in the computational science and engineering department at Nagoya University’s Graduate School of Engineering, in Japan. He is also secretary of the IEEE Nagoya Section. “The Prius helped shape the role of hybrid cars in today’s automotive market.”

The Prius was honored with an IEEE Milestone on 30 October during a ceremony held at company headquarters in Toyota City, Japan.

The G21 project

The development of the Prius began in 1993 with the G21 project, which focused on fuel efficiency, low emissions, and affordability. According to a Toyota article detailing the project’s history, by 1997, Toyota engineers—including Takeshi Uchiyamada, who has since become known as the “father of the Prius”—were satisfied they had met the challenge of achieving all three goals.

The first-generation Prius featured a compact design with aerodynamic efficiency. Its groundbreaking hybrid system enabled smooth transitions between an electric motor powered by a nickel–metal hydride battery and an internal combustion engine fueled by gasoline. The car’s design incorporated regenerative braking in the power-train arrangement to enhance the vehicle’s energy efficiency.

Regenerative braking captures the kinetic energy typically lost as heat when conventional brake pads stop the wheels with friction. Instead, the electric motor switches over to generator mode so that the wheels drive the motor in reverse rather than the motor driving the wheels. Using the motor as a generator slows the car and converts the kinetic energy into an electrical charge routed to the battery to recharge it.

“The Prius is not only the world’s first mass-produced hybrid car, but its technical and commercial success also spurred other automakers to accelerate hybrid vehicle development.” —Nobuo Kawaguchi, IEEE Nagoya Section secretary

According to the company’s “Harnessing Efficiency: A Deep Dive Into Toyota’s Hybrid Technology” article, a breakthrough was the Hybrid Synergy Drive, a system that allows the Prius to operate in different modes—electric only, gasoline only, or a combination—depending on driving conditions. A key component Toyota engineers developed from scratch was the power split device, a planetary gear system that allows smooth transitions between electric and gasoline power, permitting the engine and the motor to propel the vehicle in their respective optimal performance ranges.
The arrangement helps optimize fuel economy and simplifies the drivetrain by making a traditional transmission unnecessary.

Setting fuel-efficiency records

Nearly 30 years after its commercial debut, the Prius remains an icon of environmental responsibility combined with technical innovation. It is still setting records for fuel efficiency. When the newly released 2024 Prius LE was driven from Los Angeles to New York City in July 2023, it consumed a miserly 2.52 liters of gasoline per 100 kilometers during the 5,150-km cross-country journey.

The record was set by a so-called hypermiler, a driver who practices advanced driving techniques aimed at optimizing fuel efficiency. Hypermilers accelerate smoothly and avoid hard braking. They let off the accelerator early so the car can coast to a gradual stop without applying the brakes, and they drive as often as possible at speeds between 72 and 105 km per hour, the velocities at which a car is typically most efficient. A driver not employing such techniques can still expect fuel economy as high as 4.06 L per 100 km from the latest generation of Prius models.

Toyota has advanced the Prius’s hybrid technology with each generation, solidifying the car’s role as a leader in fuel efficiency and sustainability.

Milestone event attracts luminaries

Uchiyamada gave a brief talk at the IEEE Milestone event about the Prius’s development process and the challenges he faced as chief G21 engineer. Other notable attendees were Takeshi Uehara, president of Toyota’s power-train company; Toshio Fukuda, 2020 IEEE president; Isao Shirakawa, IEEE Japan Council history committee chair; and Jun Sato, IEEE Nagoya Section chair.

A plaque recognizing the technology is displayed at the entrance of the Toyota Technical Center, which is within walking distance of the company’s headquarters. It reads: “In 1997 Toyota Motor Corporation developed the world’s first mass-produced hybrid vehicle, the Toyota Prius, which used both an internal combustion engine and two electric motors. This vehicle achieved revolutionary fuel efficiency by recovering and reusing energy previously lost while driving. Its success helped popularize hybrid vehicles internationally, advanced the technology essential for electric power trains, contributed to the reduction of CO2 emissions, and influenced the design of subsequent electrified vehicles.”

Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments worldwide. The IEEE Nagoya Section sponsored the nomination.
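The power-split device described above follows the standard planetary-gear speed relation, which is what lets the engine, the generator, and the drive motor turn at independent but linked speeds. The short Python sketch below illustrates that relation; the tooth counts and operating points are illustrative assumptions, not Toyota's actual specifications.

```python
# Illustrative sketch of the planetary-gear ("power split") speed constraint:
#   Zs * w_sun + Zr * w_ring = (Zs + Zr) * w_carrier
# In the classic Prius layout the engine drives the carrier, the generator (MG1)
# sits on the sun gear, and the ring gear turns with the drive motor and wheels.
# Tooth counts and speeds below are made-up example values.

Z_SUN = 30    # sun-gear teeth (assumed)
Z_RING = 78   # ring-gear teeth (assumed)

def generator_rpm(engine_rpm: float, ring_rpm: float) -> float:
    """Solve the planetary constraint for the sun-gear (generator) speed."""
    return ((Z_SUN + Z_RING) * engine_rpm - Z_RING * ring_rpm) / Z_SUN

# Engine held near an efficient 2,000 rpm while the ring gear (proportional to
# road speed) turns at 1,500 rpm:
print(generator_rpm(engine_rpm=2000.0, ring_rpm=1500.0))   # -> 3300.0
# With the engine off, the generator spins backward so the car can move on
# electric power alone:
print(generator_rpm(engine_rpm=0.0, ring_rpm=1500.0))      # -> -3900.0
```

Because the generator can absorb or supply torque at any of these speeds, the engine can stay near its most efficient operating point while road speed varies, which is the continuously variable behavior the article describes.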
- Video Friday: Agile Upgrade, by Evan Ackerman, 17 January 2025 at 16:30
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
RSS 2025: 21–25 June 2025, LOS ANGELES
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

Unitree rolls out frequent updates nearly every month. This time, we present to you the smoothest walking and humanoid running in the world. We hope you like it. [ Unitree ]

This is just lovely. [ Mimus CNK ]

There’s a lot to like about Grain Weevil as an effective unitasking robot, but what I really appreciate here is that the control system is just a remote and a camera slapped onto the top of the bin. [ Grain Weevil ]

This video, “Robot arm picking your groceries like a real person,” has taught me that I am not a real person. [ Extend Robotics ]

A robot walking like a human walking like what humans think a robot walking like a robot walks like. And that was my favorite sentence of the week. [ Engineai ]

For us, robots are tools to simplify life. But they should look friendly too, right? That’s why we added motorized antennas to Reachy, so it can show simple emotions—without a full personality. Plus, they match those expressive eyes O_o! [ Pollen Robotics ]

So a thing that I have come to understand about ships with sails (thanks, Jack Aubrey!) is that sailing in the direction that the wind is coming from can be tricky. Turns out that having a boat with two fronts and no back makes this a lot easier. [ Paper ] from [ 2023 IEEE/ASME International Conference on Advanced Intelligent Mechatronics ] via [ IEEE Xplore ]

I’m Kento Kawaharazuka from JSK Robotics Laboratory at the University of Tokyo. I’m writing to introduce our human-mimetic binaural hearing system on the musculoskeletal humanoid Musashi. The robot can perform 3D sound source localization using a human-like outer ear structure and an FPGA-based hearing system embedded within it. [ Paper ] Thanks, Kento!

The third CYBATHLON took place in Zurich on 25–27 October 2024. The CYBATHLON is a competition for people with impairments using novel robotic technologies to perform activities of daily living. It was invented and initiated by Prof. Robert Riener at ETH Zurich, Switzerland. Races were held in eight disciplines including arm and leg prostheses, exoskeletons, powered wheelchairs, brain computer interfaces, robot assistance, vision assistance, and functional electrical stimulation bikes. [ Cybathlon ] Thanks, Robert!

If you’re going to work on robot dogs, I’m honestly not sure whether Purina would be the most or least appropriate place to do that. [ Michigan Robotics ]
- How Antivirus Software Has Changed With the Internet, by Dina Genkina, 17 January 2025 at 12:00
We live in a world filled with computer viruses, and antivirus software is almost as old as the Internet itself: The first version of what would become McAfee antivirus came out in 1987—just four years after the Internet booted up. For many of us, antivirus software is an annoyance, taking up computer resources and generating opaque pop-ups. But it is also necessary: Almost every computer today is protected by some kind of antivirus software, either built into the operating system or provided by a third party. Despite their ubiquity, however, not many people know how these antivirus tools are built.

Paul A. Gagniuc set out to fix this apparent oversight. A professor of bioinformatics and programming languages at the University Politehnica of Bucharest, he has been interested in viruses and antivirus software since he was a child. In his book Antivirus Engines: From Methods to Innovations, Design, and Applications, published last October, he dives deep into the technical details of malware and how to fight it, all motivated by his own experience of designing an antivirus engine—a piece of software that protects a computer from malware—from scratch in the mid-2000s.

IEEE Spectrum spoke with Gagniuc about his experience as a lifelong computer native, antivirus basics and best practices, his view of how the world of malware and antivirus software has changed over the last decades, the effects of cryptocurrencies, and his opinion on what the issues with fighting malware will be going forward.

How did you become interested in antivirus software?

Paul Gagniuc: Individuals of my age grew up with the Internet. When I was growing up, it was the wild wild West, and there were a lot of security problems. And the security field was at its very beginning, because nothing was controlled at the time. Even small children had access to very sophisticated pieces of software in open source. Knowing about malware provided a lot of power for a young man at that time, so I started to understand the codes that were available starting at the age of 12 or so. And a lot of codes were available. I wrote a lot of versions of different viruses, and I did manage to make some of my own, but not with the intent of doing harm, but for self-defense.

Around 2002 I started to think of different strategies to detect malware. And between 2006 and 2008 I started to develop an antivirus engine, called Scut Antivirus. I tried to make a business based on this antivirus; however, the business side and programming side are two separate things. I was the programmer. I was the guy that made the software framework, but the business side wasn’t that great, because I didn’t know anything about business.

What was different about Scut Antivirus compared with existing solutions, from a technical perspective?

Gagniuc: The speed, and the amount of resources it consumed. It was almost invisible to the user, unlike the antiviruses of the time. Many users at the time started to avoid antiviruses for this reason, because at one point, the antivirus consumed so many resources that the user could not do their work.

How does antivirus software work?

Gagniuc: How can we detect a particular virus? Well, we take a little piece of the code from that virus, and we put that code inside an antivirus database. But what do we do when we have 1 million, 2 million different malware files, which are all different?
So what happens is that malware from two years, three years ago, for instance, is removed from the database, because those files are not a danger to the community anymore, and what is kept in the database are just the new threats. And there’s an algorithm that’s described in my book called the Aho-Corasick algorithm. It’s a very special algorithm that allows one to check millions of virus signatures against one suspected file. It was made in the 1970s, and it is extremely fast.

“Once Bitcoin appeared, every type of malware out there transformed itself into ransomware.” —Paul Gagniuc, University Politehnica of Bucharest

This is the basis of classical antivirus software. Now, people are using artificial intelligence to see how useful it can be, and I’m sure it can be, because at root the problem is pattern recognition. But there are also malware files that can change their own code, called polymorphic malware, which are very hard to detect.

Where do you get a database of viruses to check for?

Gagniuc: When I was working on Scut Antivirus, I had some help from some hackers from Ukraine, who allowed me to have a big database, a big malware bank. It’s an archive which has several millions of infected files with different types of malware.

At that time, VirusTotal was becoming more and more known in the security world. Before it was bought by Google [in 2012], VirusTotal was the place where all the security companies started to verify files. So if we had a suspected file, we uploaded it to VirusTotal.

“I’m scared of a loss of know-how, and not only for antivirus, but for technology in general.” —Paul Gagniuc, University Politehnica of Bucharest

This was a very interesting system, because it allowed for quick verification of a suspicious file. But this also had some consequences. What happened was that every security company started to believe what they see in the results of VirusTotal. So that did lead to a loss of diversity in the different laboratories, from Kaspersky to Norton.

How has malware changed during the time you’ve been involved in the field?

Gagniuc: There are two different periods, namely the period up to 2009, and the period after that. The security world split when Bitcoin appeared. Before Bitcoin, we had viruses, we had the Trojan horses, we had worms, we had different types of spyware and keyloggers. We had everything. The diversity was high. Each of these types of malware had a specific purpose, but nothing was linked to real life. Ransomware existed, but at the time it was mainly playful. Why? Because in order to have ransomware, you have to be able to oblige the user to pay you, and in order to pay, you have to make contact with a bank. And when you make contact with a bank, you have to have an ID.

Once Bitcoin appeared, every type of malware out there transformed itself into ransomware. Once a user can pay by using Bitcoin or another cryptocurrency, then you don’t have any control over the identity of the hacker.

Where do you see the future of antiviruses going?

Gagniuc: It’s hard to say what the future will bring, but it’s indispensable. You cannot live without a security system. Antiviruses are here to stay. Of course, a lot of trials will be made by using artificial intelligence. But I’m scared of a loss of know-how, and not only for antivirus, but for technology in general. In my view, something happened in the education of young people around 2008, where they became less apt at working with the assembler.
Today, at my university in Bucharest, I see that every engineering student knows one thing and only one thing: Python. And Python uses a virtual machine, like Java; it’s a combination of what in the past was called a scripting language and a programming language. You cannot do with it what you could do with C++, for instance. So at the worldwide level, there was a de-professionalization of young people, whereas in the past, in my time, everyone was advanced. You couldn’t work with a computer without being very advanced. Big leaders of our companies in this globalized system must take into consideration the possibility of loss of knowledge.

Did you write the book partially as an effort to fix this lack of know-how?

Gagniuc: Yes. Basically, this loss of knowledge can be avoided if everybody brings their own experience into the publishing world. Because even if I don’t write that book for humans, although I’m sure that many humans are interested in the book, at least it will be known by artificial intelligence. That’s the reality.
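As a rough illustration of the signature-scanning approach Gagniuc describes, the sketch below builds a tiny Aho-Corasick automaton in Python and checks one file's bytes against several signatures in a single pass. The signatures and the "suspect" data are invented placeholders; a real engine would work from a curated signature database.

```python
from collections import deque

def build_automaton(patterns):
    """Build Aho-Corasick goto/fail/output tables for a list of byte-string signatures."""
    goto, fail, output = [{}], [0], [set()]
    for pat in patterns:
        state = 0
        for b in pat:
            if b not in goto[state]:
                goto.append({})
                fail.append(0)
                output.append(set())
                goto[state][b] = len(goto) - 1
            state = goto[state][b]
        output[state].add(pat)
    queue = deque(goto[0].values())   # breadth-first pass computes failure links
    while queue:
        r = queue.popleft()
        for b, s in goto[r].items():
            queue.append(s)
            f = fail[r]
            while f and b not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(b, 0)
            output[s] |= output[fail[s]]
    return goto, fail, output

def scan(data, goto, fail, output):
    """Return every signature found in `data`, checking all of them in one pass."""
    state, found = 0, set()
    for b in data:
        while state and b not in goto[state]:
            state = fail[state]
        state = goto[state].get(b, 0)
        found |= output[state]
    return found

# Hypothetical signatures and a hypothetical "suspect file" held in memory.
signatures = [b"\xde\xad\xbe\xef", b"EVIL_MARKER"]
tables = build_automaton(signatures)
suspect = b"...harmless bytes...EVIL_MARKER...more bytes...\xde\xad\xbe\xef"
print(scan(suspect, *tables))   # -> {b'EVIL_MARKER', b'\xde\xad\xbe\xef'}
```

The key property, and the reason the interview singles the algorithm out, is that scan time grows with the size of the file being checked rather than with the number of signatures in the database.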
- Asimov's Laws of Robotics Need an Update for AI, by Dariusz Jemielniak, 14 January 2025 at 15:00
In 1942, the legendary science fiction author Isaac Asimov introduced his Three Laws of Robotics in his short story “Runaround.” The laws were later popularized in his seminal story collection I, Robot.

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While drawn from works of fiction, these laws have shaped discussions of robot ethics for decades. And as AI systems—which can be considered virtual robots—have become more sophisticated and pervasive, some technologists have found Asimov’s framework useful for considering the potential safeguards needed for AI that interacts with humans.

But the existing three laws are not enough. Today, we are entering an era of unprecedented human-AI collaboration that Asimov could hardly have envisioned. The rapid advancement of generative AI capabilities, particularly in language and image generation, has created challenges beyond Asimov’s original concerns about physical harm and obedience.

Deepfakes, Misinformation, and Scams

The proliferation of AI-enabled deception is particularly concerning. According to the FBI’s 2024 Internet Crime Report, cybercrime involving digital manipulation and social engineering resulted in losses exceeding US $10.3 billion. The European Union Agency for Cybersecurity’s 2023 Threat Landscape specifically highlighted deepfakes—synthetic media that appears genuine—as an emerging threat to digital identity and trust.

Social media misinformation is spreading like wildfire. I studied it during the pandemic extensively and can only say that the proliferation of generative AI tools has made its detection increasingly difficult. To make matters worse, AI-generated articles are just as persuasive or even more persuasive than traditional propaganda, and using AI to create convincing content requires very little effort.

Deepfakes are on the rise throughout society. Botnets can use AI-generated text, speech, and video to create false perceptions of widespread support for any political issue. Bots are now capable of making and receiving phone calls while impersonating people. AI scam calls imitating familiar voices are increasingly common, and any day now, we can expect a boom in video call scams based on AI-rendered overlay avatars, allowing scammers to impersonate loved ones and target the most vulnerable populations. Anecdotally, my very own father was surprised when he saw a video of me speaking fluent Spanish, as he knew that I’m a proud beginner in this language (400 days strong on Duolingo!). Suffice it to say that the video was AI-edited.

Even more alarmingly, children and teenagers are forming emotional attachments to AI agents, and are sometimes unable to distinguish between interactions with real friends and bots online. Already, there have been suicides attributed to interactions with AI chatbots.

In his 2019 book Human Compatible, the eminent computer scientist Stuart Russell argues that AI systems’ ability to deceive humans represents a fundamental challenge to social trust. This concern is reflected in recent policy initiatives, most notably the European Union’s AI Act, which includes provisions requiring transparency in AI interactions and transparent disclosure of AI-generated content.
In Asimov’s time, people couldn’t have imagined how artificial agents could use online communication tools and avatars to deceive humans. Therefore, we must make an addition to Asimov’s laws.

Fourth Law: A robot or AI must not deceive a human by impersonating a human being.

The Way Toward Trusted AI

We need clear boundaries. While human-AI collaboration can be constructive, AI deception undermines trust and leads to wasted time, emotional distress, and misuse of resources. Artificial agents must identify themselves to ensure our interactions with them are transparent and productive. AI-generated content should be clearly marked unless it has been significantly edited and adapted by a human.

Implementation of this Fourth Law would require:

- Mandatory AI disclosure in direct interactions,
- Clear labeling of AI-generated content,
- Technical standards for AI identification,
- Legal frameworks for enforcement,
- Educational initiatives to improve AI literacy.

Of course, all this is easier said than done. Enormous research efforts are already underway to find reliable ways to watermark or detect AI-generated text, audio, images, and videos. Creating the transparency I’m calling for is far from a solved problem.

But the future of human-AI collaboration depends on maintaining clear distinctions between human and artificial agents. As noted in the IEEE’s 2022 “Ethically Aligned Design” framework, transparency in AI systems is fundamental to building public trust and ensuring the responsible development of artificial intelligence.

Asimov’s complex stories showed that even robots that tried to follow the rules often discovered the unintended consequences of their actions. Still, having AI systems that are trying to follow Asimov’s ethical guidelines would be a very good start.
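To make the "clear labeling" item in the list above more concrete, here is one minimal sketch of what a machine-readable disclosure label attached to AI-generated content might look like. The field names and the attach_label helper are hypothetical illustrations, not part of any existing standard.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDisclosureLabel:
    """Hypothetical machine-readable label declaring that content is AI-generated."""
    generator: str      # model or system that produced the content (assumed field)
    generated_at: str   # ISO 8601 timestamp
    human_edited: bool  # whether a person substantially edited the output
    disclosure: str     # human-readable notice to display alongside the content

def attach_label(content: str, generator: str, human_edited: bool = False) -> dict:
    """Bundle content with its disclosure label; a real system might also sign the bundle."""
    label = AIDisclosureLabel(
        generator=generator,
        generated_at=datetime.now(timezone.utc).isoformat(),
        human_edited=human_edited,
        disclosure="This content was generated by an AI system.",
    )
    return {"content": content, "ai_disclosure": asdict(label)}

print(json.dumps(attach_label("Sample AI-written paragraph.", "example-llm-1"), indent=2))
```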
- Be the Key Influencer in Your Career, by Tariq Samad, 13 January 2025 at 19:00
This article is part of our exclusive career advice series in partnership with the IEEE Technology and Engineering Management Society.

When thinking about influencers, you might initially consider people with a large social media following who have the power to affect people with an interest in fashion, fitness, or food. However, the people closest to you can influence the success you have in the early days of your career in ways that affect your professional journey. These influencers include you, your management, colleagues, and family.

Take control of your career

You are—or should be—the most prominent influencer of your career. Fortunately, you’re the one you have the most control over. Your ability to solve engineering problems is a significant determining factor in your career growth. The tech world is constantly evolving, so you need to stay on top of the latest developments in your specialization. You also should make it a priority to learn about related technical fields, as it can help you understand more and advance faster.

Another trait that can influence your career trajectory is your personality. How comfortable are you with facing awkward or difficult situations? What is your willingness to accept levels of risk when making commitments? What is your communication style with your peers and management? Do you prefer routine or challenging assignments? How interested are you in working with people from different backgrounds and cultures? Do you prefer to work on your own or as part of a team? Most of those questions don’t have right or wrong answers, but how you respond to them can help you chart your path.

At the same time, be cognizant of the impression you make on others. How would you like them to think of you? How you present yourself is important, and it’s within your control. Lead with confidence about your abilities, but don’t be afraid to seek help or ask questions to learn more. You want to be confident in yourself, but if you can’t ask for help or acknowledge when you’re wrong, you’ll struggle to form good relationships with your colleagues and management.

Learn about your company’s leadership

Your immediate supervisor, manager, and company leaders can impact your career. Much depends on your willingness to demonstrate initiative, accept challenging work, and be dedicated to the team. Don’t forget that it is a job, however, and you will not stay in your first role forever.

Develop a good business relationship with your manager while recognizing the power dynamic. Learn to communicate with the manager; what works for one leader might not work for another. Like all of us, managers have their idiosyncrasies. Accept theirs and be aware of your own. If your supervisor makes unachievable performance demands, don’t immediately consider it a red flag. Such stretch assignments can be growth opportunities, provided an environment of trust exists. But beware of bosses who become possessive and prevent you from accepting other opportunities within the organization rather than viewing you as the organization’s investment in talent.

Make it a priority to learn about your company’s leadership. How does the business work? What are the top priorities and values for the company, and why? Find out the goals of the organization and your department. Learn how budgets are allocated and adjusted. Understand how the engineering and technology departments work with the marketing department, system integration, manufacturing, and other groups.
Companies differ in structure, business models, industry sectors, financial health, and many other aspects. The insight you gain from your managers is valuable to you, both in your current organization and with future employers.

Form strong relationships with coworkers

Take the time to understand your colleagues, who probably face similar issues. Try to learn something about the people you spend most of your day with attempting to solve technical problems. What do you have in common? How do your skills complement each other? You also should develop social connections with your colleagues—which can enrich your after-work life and help you bond over job-related issues.

As a young professional, you might not fully understand the industry in which your employer operates. A strong collaborative relationship with more experienced colleagues can help you learn about customer needs, available products and services, competitors, market share, regulations, and technical standards. By becoming more aware of your industry, you might even come up with ideas for new offerings and find ways to develop your skills.

Family ties are important

You’re responsible for your career, but the happiness and well-being of those close to you should be part of the calculus of your life. Individual circumstances related to family—a partner’s job, say, or parents’ health or children’s needs—can influence your professional decisions. Your own health and career trajectory are also part of the whole. Remember: Your career is part of your life, not the entire thing. Find a way to balance your career, life, and family.

Planning your next steps

As engineers and technologists, our work is not just a means to earn a living but also a source of fulfillment, social connections, and intellectual challenge. Where would you like to be professionally in 5, 10, or 15 years? Do you see yourself as an expert in key technical areas leading large and impactful programs? A manager or senior executive? An entrepreneur? If you haven’t articulated your objectives and preferences, that’s fine. You’re early in your career, and it’s normal to be figuring out what you want. But if so, you should think about what you need to learn before planning for your next steps.

Whatever your path forward, you can benefit from your career influencers—the people who challenge you, teach you, and cause you to think about what you want.
- AI Mistakes Are Very Different Than Human Mistakes, by Nathan E. Sanders, 13 January 2025 at 13:00
Humans make mistakes all the time. All of us do, every day, in tasks both new and routine. Some of our mistakes are minor and some are catastrophic. Mistakes can break trust with our friends, lose the confidence of our bosses, and sometimes be the difference between life and death.

Over the millennia, we have created security systems to deal with the sorts of mistakes humans commonly make. These days, casinos rotate their dealers regularly, because they make mistakes if they do the same task for too long. Hospital personnel write on limbs before surgery so that doctors operate on the correct body part, and they count surgical instruments to make sure none were left inside the body. From copyediting to double-entry bookkeeping to appellate courts, we humans have gotten really good at correcting human mistakes.

Humanity is now rapidly integrating a wholly different kind of mistake-maker into society: AI. Technologies like large language models (LLMs) can perform many cognitive tasks traditionally fulfilled by humans, but they make plenty of mistakes. It seems ridiculous when chatbots tell you to eat rocks or add glue to pizza. But it’s not the frequency or severity of AI systems’ mistakes that differentiates them from human mistakes. It’s their weirdness. AI systems do not make mistakes in the same ways that humans do. Much of the friction—and risk—associated with our use of AI arises from that difference. We need to invent new security systems that adapt to these differences and prevent harm from AI mistakes.

Human Mistakes vs. AI Mistakes

Life experience makes it fairly easy for each of us to guess when and where humans will make mistakes. Human errors tend to come at the edges of someone’s knowledge: Most of us would make mistakes solving calculus problems. We expect human mistakes to be clustered: A single calculus mistake is likely to be accompanied by others. We expect mistakes to wax and wane, predictably depending on factors such as fatigue and distraction. And mistakes are often accompanied by ignorance: Someone who makes calculus mistakes is also likely to respond “I don’t know” to calculus-related questions.

To the extent that AI systems make these human-like mistakes, we can bring all of our mistake-correcting systems to bear on their output. But the current crop of AI models—particularly LLMs—make mistakes differently. AI errors come at seemingly random times, without any clustering around particular topics. LLM mistakes tend to be more evenly distributed through the knowledge space. A model might be equally likely to make a mistake on a calculus question as it is to propose that cabbages eat goats. And AI mistakes aren’t accompanied by ignorance. An LLM will be just as confident when saying something completely wrong—and obviously so, to a human—as it will be when saying something true. The seemingly random inconsistency of LLMs makes it hard to trust their reasoning in complex, multi-step problems. If you want to use an AI model to help with a business problem, it’s not enough to see that it understands what factors make a product profitable; you need to be sure it won’t forget what money is.

How to Deal with AI Mistakes

This situation indicates two possible areas of research. The first is to engineer LLMs that make more human-like mistakes. The second is to build new mistake-correcting systems that deal with the specific sorts of mistakes that LLMs tend to make. We already have some tools to lead LLMs to act in more human-like ways.
Many of these arise from the field of “alignment” research, which aims to make models act in accordance with the goals and motivations of their human developers. One example is the technique that was arguably responsible for the breakthrough success of ChatGPT: reinforcement learning from human feedback. In this method, an AI model is (figuratively) rewarded for producing responses that get a thumbs-up from human evaluators. Similar approaches could be used to induce AI systems to make more human-like mistakes, particularly by penalizing them more for mistakes that are less intelligible.

When it comes to catching AI mistakes, some of the systems that we use to prevent human mistakes will help. To an extent, forcing LLMs to double-check their own work can help prevent errors. But LLMs can also confabulate seemingly plausible, but truly ridiculous, explanations for their flights from reason.

Other mistake mitigation systems for AI are unlike anything we use for humans. Because machines can’t get fatigued or frustrated in the way that humans do, it can help to ask an LLM the same question repeatedly in slightly different ways and then synthesize its multiple responses. Humans won’t put up with that kind of annoying repetition, but machines will.

Understanding Similarities and Differences

Researchers are still struggling to understand where LLM mistakes diverge from human ones. Some of the weirdness of AI is actually more human-like than it first appears. Small changes to a query to an LLM can result in wildly different responses, a problem known as prompt sensitivity. But, as any survey researcher can tell you, humans behave this way, too. The phrasing of a question in an opinion poll can have drastic impacts on the answers.

LLMs also seem to have a bias towards repeating the words that were most common in their training data; for example, guessing familiar place names like “America” even when asked about more exotic locations. Perhaps this is an example of the human “availability heuristic” manifesting in LLMs, with machines spitting out the first thing that comes to mind rather than reasoning through the question. And like humans, perhaps, some LLMs seem to get distracted in the middle of long documents; they’re better able to remember facts from the beginning and end. There is already progress on improving this error mode, as researchers have found that LLMs trained on more examples of retrieving information from long texts seem to do better at retrieving information uniformly.

In some cases, what’s bizarre about LLMs is that they act more like humans than we think they should. For example, some researchers have tested the hypothesis that LLMs perform better when offered a cash reward or threatened with death. It also turns out that some of the best ways to “jailbreak” LLMs (getting them to disobey their creators’ explicit instructions) look a lot like the kinds of social engineering tricks that humans use on each other: for example, pretending to be someone else or saying that the request is just a joke. But other effective jailbreaking techniques are things no human would ever fall for. One group found that if they used ASCII art (constructions of symbols that look like words or pictures) to pose dangerous questions, like how to build a bomb, the LLM would answer them willingly.

Humans may occasionally make seemingly random, incomprehensible, and inconsistent mistakes, but such occurrences are rare and often indicative of more serious problems.
We also tend not to put people exhibiting these behaviors in decision-making positions. Likewise, we should confine AI decision-making systems to applications that suit their actual abilities—while keeping the potential ramifications of their mistakes firmly in mind.
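The repeated-querying mitigation described above (ask the same question several ways, then synthesize the answers) can be sketched in a few lines. The ask_model function below is a hypothetical stand-in for whatever LLM client is in use; the rephrasings and the majority-vote synthesis are one simple way to combine answers, not a prescribed method.

```python
from collections import Counter
from typing import Callable

def ask_consistently(ask_model: Callable[[str], str], question: str) -> str:
    """Ask several rephrasings of the same question and return the most common answer."""
    rephrasings = [
        question,
        f"Answer concisely: {question}",
        f"{question} Reply with only the answer.",
        f"Think carefully, then answer: {question}",
    ]
    answers = [ask_model(p).strip().lower() for p in rephrasings]
    # Majority vote is a crude synthesis; disagreement is itself a useful warning sign.
    best, count = Counter(answers).most_common(1)[0]
    if count <= len(answers) // 2:
        return "low confidence: answers disagreed -> " + "; ".join(answers)
    return best

# Example with a toy stand-in "model" that is randomly wrong some of the time.
import random
def toy_model(prompt: str) -> str:
    return random.choice(["42", "42", "42", "17"])

print(ask_consistently(toy_model, "What is 6 times 7?"))
```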
- IEEE Offers New Credential to Address Tech Skills Gap, by Jennifer Fong, 11 January 2025 at 14:00
Analysts predict that demand for engineers will skyrocket during the next decade, and that the supply will fall substantially short. A CompTIA report about the tech workforce estimates that there will be an additional 7.1 million tech jobs in the United States by 2034. Yet nearly one in three engineering jobs will go unfilled each year through 2030, according to a report from the Boston Consulting Group and SAE International.

Ongoing tech investment programs such as the 2022 U.S. CHIPS and Science Act seek to build a strong technical workforce. The reality, however, is that the workforce pipeline is leaking badly. The BCG-SAE report found that only 13 percent of students who express initial interest in engineering and technical careers ultimately choose that career path. The statistics are even worse among women. Of the women who graduated with an engineering degree from 2006 to 2010, only 27 percent were still working in the field in 2021, compared with 41 percent of men with the same degree.

To help address the significant labor gap, companies are considering alternative educational pathways to technical jobs. The businesses realize that some technician roles might not actually require a college degree. Ways to develop needed skills outside of traditional schooling—such as apprenticeships, vocational programs, professional certifications, and online courses—could help fill the workforce pipeline. When taking those alternative pathways, though, students need a way to demonstrate they have acquired the skills employers are seeking. One way is through skills-based microcredentials.

IEEE is the world’s largest technical professional organization, with decades of experience offering industry-relevant credentials as well as expertise in global standardization. As the tech industry looks for a meaningful credential to help ease the semiconductor labor shortage, IEEE has the credibility and infrastructure to offer a meaningful, standardized microcredentialing program that meets semiconductor industry needs and creates opportunities for people who have traditionally been underrepresented in technical fields. The IEEE Credentialing Program is now offering skills-based microcredentials for training courses.

Earning credentials while acquiring skills

Microcredentials are issued when learners prove mastery of a specific skill. Unlike more traditional university degrees and course certificates, microcredential programs are not based on successfully completing a full learning program. Rather, a student might earn multiple microcredentials in a single program based on the skills demonstrated. A qualified instructor using an assessment instrument determines that the learner has acquired the skill and earned the credential. Mastery of skills might be determined through observation, completion of a task, or a written test.

In a technician training course held in a clean-room setting, for example, an instructor might use an observation checklist that rates each student’s ability to demonstrate adherence to safety procedures. During the assessment, the students complete the steps while the instructor observes. Upon successful completion of each step, a student would earn a microcredential for that skill. Microcredentials are stackable; a student can earn them from different programs and institutions to demonstrate their growing skill set. Students can carry their earned credentials in a digital “wallet” for easy access.
The IEEE Learning Technology Standards Committee is working on a recommended practice standard to help facilitate the portability of such records.

Microcredentials differ from professional credentials

When considering microcredentials, it is important to understand where they fall in the wider scope of credentials available through learning programs. The credentials commonly earned can be placed along a spectrum, from easy accessibility and low personal investment to restricted accessibility and high investment. Microcredentials are among the most accessible alternative educational pathways, but they are in need of standardization.

The most formal credentials are degrees issued by universities and colleges. They have a strict set of criteria associated with them, and they often are accredited by a third party, such as ABET in the United States. The degrees typically require a significant investment of time and money, and they are required for some professional roles as well as for advanced studies.

Certifications require specialized training on a formal body of knowledge, and students need to pass an exam to prove mastery of the subject. A learner seeking such a credential typically pays both for the learning and the test. Some employers accept certifications for certain types of roles, particularly in IT fields. A cybersecurity professional might earn a Computing Technology Industry Association Security+ certification, for example. CompTIA is a nonprofit trade association that issues certifications for the IT industry.

Individual training courses are farther down the spectrum. Typically, a learner receives a certificate upon successful completion of an individual training course. After completing a series of courses in a program, students might receive a digital badge, which comes with associated metadata about the program that can be shared on professional networks and CVs. The credentials often are associated with continuing professional education programs.

Microcredentials are at the end of the accessibility spectrum. Tied to a demonstrated mastery of skills, they are driven by assessments, rather than completion of a formal learning program or number of hours of instruction. This key difference can make them the most accessible type of credential, and one that can help a job seeker pursue alternative routes to employment beyond a formal degree or certification.

Standardization of microcredentials

A number of educational institutions and training providers offer microcredentials. Different providers have different criteria when issuing microcredentials, though, making them less useful to industry. Some academic institutions, for example, consider anything less than a university degree to be a microcredential. Other training providers offer microcredentials for completing a series of courses. There are other types of credentials that work for such scenarios, however. By ensuring that microcredentials are tied to skills alone, IEEE can provide a useful differentiation that benefits both job seekers and employers.

Microcredentials for clean-room training

IEEE is working to standardize the definition of microcredentials and what is required to issue them. By serving as a centralized source and drawing on more than 30 years of experience in issuing professional credentials, IEEE can help microcredential providers offer a credit that is recognized by—and meaningful to—industry.
That, in turn, can help job seekers increase their career options as they build proof of the skills they’ve developed.

Last year IEEE collaborated with the University of Southern California, in Los Angeles, and the California DREAMS Microelectronics Hub on a microcredentialing program. USC offered a two-week Cleanroom Gateway pilot program to help adult learners who were not currently enrolled in a USC degree program learn the fundamentals of working in a semiconductor fabrication clean room. The school wanted to provide them with a credential that would be recognized by semiconductor companies and help improve their technician-level job prospects.

USC contacted IEEE to discuss credentialing opportunities. Together, the two organizations identified key industry-relevant skills that were taught in the program, as well as the assessment instruments needed to determine if learners master the skills. IEEE issued microcredentials for each skill mastered, along with a certificate and professional development hours for completing the entire program. The credentials, which now can be included on student CVs and LinkedIn profiles, are a good way for the students to show employers that they have the skills to work as a clean-room technician.

How the IEEE program works

IEEE’s credentialing program allows technical learning providers to supply credentials that bear the IEEE logo. Because IEEE is well respected in its fields of interest, its credentials are recognized by employers, who understand that the learning programs issuing them have been reviewed and approved. Credentials that can be issued through the IEEE Credentialing Program include certificates, digital badges, and microcredentials.

Training providers that want to offer standardized microcredentials can apply to the IEEE Credentialing Program to become approved. Applications are reviewed by a committee to ensure that the provider is credible, offers training in IEEE’s fields of interest, has qualified instructors, and has well-defined assessments. Once a provider is approved, IEEE will work with it on the credentialing needs for each course offered, including the selection of skills to be recognized, designing the microcredentials, and creating a credential-issuing process. Upon successful completion of the program by learners, IEEE will issue the microcredentials on behalf of the training provider.

You can learn more about offering IEEE microcredentials here.
- Video Friday: Arms on Vacuums, by Evan Ackerman, 10 January 2025 at 17:00
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
RSS 2025: 21–25 June 2025, LOS ANGELES
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

I’m not totally sure yet about the utility of having a small arm on a robot vacuum, but I love that this is a real thing. At least, it is at CES this year. [ Roborock ]

We posted about SwitchBot’s new modular home robot system earlier this week, but here’s a new video showing some potentially useful hardware combinations. [ SwitchBot ]

Yes, it’s in sim, but (and this is a relatively new thing) I will not be shocked to see this happen on Unitree’s hardware in the near future. [ Unitree ]

With ongoing advancements in system engineering, LimX Dynamics’ full-size humanoid robot features a hollow actuator design and high torque-density actuators, enabling full-body balance for a wide range of motion. Now it achieves complex full-body movements in an ultra-stable and dynamic manner. [ LimX Dynamics ]

We’ve seen hybrid quadrotor bipeds before, but this one, which is imitating the hopping behavior of Jacana birds, is pretty cute. What’s a Jacana bird, you ask? It’s these things, which surely must have the most extreme foot-to-body ratio of any bird: Also, much respect to the researchers for confidently titling this supplementary video “An Extremely Elegant Jump.” [ SSRN Paper preprint ]

Twelve minutes flat from suitcase to mobile manipulator. Not bad! [ Pollen Robotics ]

Happy New Year from Dusty Robotics! [ Dusty Robotics ]
- This Pool Robot Is the First With Ultrasonic Mapping, by Evan Ackerman, 10 January 2025 at 13:00
Back in the day, the defining characteristic of home-cleaning robots was that they’d randomly bounce around your floor as part of their cleaning process, because the technology required to localize and map an area hadn’t yet trickled down to the consumer space. That all changed in 2010, when home robots started using lidar (and other things) to track their location and optimize how they cleaned.

Consumer pool-cleaning robots are lagging about 15 years behind indoor robots on this, for a couple of reasons. First, most pool robots—different from automatic pool cleaners, which are purely mechanical systems that are driven by water pressure—have been tethered to an outlet for power, meaning that maximizing efficiency is less of a concern. And second, 3D underwater localization is a much different (and arguably more difficult) problem to solve than 2D indoor localization was. But pool robots are catching up, and at CES this week, Wybot introduced an untethered robot that uses ultrasound to generate a 3D map for fast, efficient pool cleaning. And it’s solar powered and self-emptying, too.

Underwater localization and navigation is not an easy problem for any robot. Private pools are certainly privileged to be operating environments with a reasonable amount of structure and predictability, at least if everything is working the way it should. But the lighting is always going to be a challenge, between bright sunlight, deep shadow, wave reflections, and occasionally murky water if the pool chemicals aren’t balanced very well. That makes relying on any light-based localization system iffy at best, and so Wybot has gone old-school, with ultrasound.

Wybot Brings Ultrasound Back to Bots

Ultrasound used to be a very common way for mobile robots to navigate. You may (or may not) remember venerable robots like the Pioneer 3, with those big ultrasonic sensors across its front. As cameras and lidar got cheap and reliable, the messiness of ultrasonic sensors fell out of favor, but sound is still ideal for underwater applications where anything that relies on light may struggle. The Wybot S3 uses 12 ultrasonic sensors, plus motor encoders and an inertial measurement unit, to map residential pools in three dimensions.

“We had to choose the ultrasonic sensors very carefully,” explains Felix (Huo) Feng, the chief technology officer of Wybot. “Actually, we use multiple different sensors, and we compute time of flight [of the sonar pulses] to calculate distance.” The positional accuracy of the resulting map is about 10 centimeters, which is totally fine for the robot to get its job done, although Feng says that they’re actively working to improve the map’s resolution. For path-planning purposes, the 3D map gets deconstructed into a series of 2D maps, since the robot needs to clean the bottom of the pool, stairs, and ledges, and also the sides of the pool.

Efficiency is particularly important for the S3 because its charging dock has enough solar panels on the top of it to provide about 90 minutes of run time for the robot over the course of an optimally sunny day. If your pool isn’t too big, that means the robot can clean it daily without requiring a power connection to the dock. The dock also sucks debris out of the collection bin on the robot itself, and Wybot suggests that the S3 can go for up to a month of cleaning without the dock overflowing.

The S3 has a camera on the front, which is used primarily to identify and prioritize dirtier areas (through AI, of course) that need focused cleaning.
At some point in the future, Wybot may be able to use vision for navigation too, but my guess is that for reliable 24/7 navigation, ultrasound will still be necessary. One other interesting little tidbit is the communication system. The dock can talk to your Wi-Fi, of course, and then talk to the robot while it’s charging. Once the robot goes off for a swim, however, traditional wireless signals won’t work, but the dock has its own sonar that can talk to the robot at several bytes per second. This isn’t going to get you streaming video from the robot’s camera, but it’s enough to let you steer the robot if you want, or ask it to come back to the dock, get battery status updates, and similar sorts of things. The Wybot S3 will go on sale in Q2 of this year for a staggering US $2,999, but that’s how it always works: The first time a new technology shows up in the consumer space, it’s inevitably at a premium. Give it time, though, and my guess is that the ability to navigate and self-empty will become standard features in pool robots. But as far as I know, Wybot got there first.
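Time-of-flight ranging of the kind Feng describes reduces to one line of arithmetic: distance is half the round-trip travel time multiplied by the speed of sound in water (roughly 1,480 meters per second, versus about 343 m/s in air). The sketch below is a generic illustration of that calculation, not Wybot's firmware; the sample echo times are invented.

```python
# Generic ultrasonic time-of-flight ranging, as used in principle for underwater mapping.
SPEED_OF_SOUND_WATER_M_S = 1480.0   # approximate; varies with temperature and salinity

def echo_distance_m(round_trip_s: float, speed_m_s: float = SPEED_OF_SOUND_WATER_M_S) -> float:
    """Distance to a surface from the round-trip time of a sonar pulse (out and back)."""
    return speed_m_s * round_trip_s / 2.0

# Hypothetical echo times from sensors pointed at nearby pool surfaces.
for t in (0.0005, 0.0020, 0.0041):   # seconds
    print(f"round trip {t * 1000:.1f} ms -> {echo_distance_m(t):.2f} m")
```

Fusing many such range readings with the wheel-encoder and IMU data is what turns individual distances into the 3D map the article describes.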
- Meet the Candidates Running for 2026 IEEE President-Elect, by Joanna Goodrich, 9 January 2025 at 19:00
The IEEE Board of Directors has nominated IEEE Senior Members Jill I. Gostin and David Alan Koehler as candidates for 2026 IEEE president-elect. IEEE Life Fellow Manfred “Fred” J. Schindler is seeking to be a petition candidate. The winner of this year’s election will serve as IEEE president in 2027. For more information about the election, president-elect candidates, and the petition process, visit the IEEE election website.

IEEE Senior Member Jill I. Gostin
Nominated by the IEEE Board of Directors

Gostin is a principal research scientist at the Georgia Tech Research Institute in Atlanta, focusing on algorithms and developing and testing software for sensor systems. She is the systems engineering, integration, and test lead in the software engineering and architecture division. She has managed large technical programs and led research collaborations among academia, government, and industry. Her papers have been published in multiple conference proceedings. Her presentation on fractal geometry applications was selected as Best Paper at the National Telesystems Conference and was published in IEEE Aerospace and Electronic Systems Magazine.

Gostin has held several IEEE leadership positions, including vice president, IEEE Member and Geographic Activities, and Region 3 director. She is a former chair of the IEEE Atlanta Section and of the IEEE Computer Society’s Atlanta chapter. She served on the IEEE Computer and IEEE Aerospace and Electronic Systems societies’ boards of governors and has led or been a member of several IEEE organizational units and committees, locally and globally.

In 2016 the Georgia Women in Technology named Gostin its Woman of the Year, an award that recognizes technology executives for their accomplishments as business leaders, technology visionaries, and impact makers in their community.

IEEE Senior Member David Alan Koehler
Nominated by the IEEE Board of Directors

Koehler is a business development manager for Doble Engineering Co. in Marlborough, Mass. Doble, which manufactures diagnostic testing equipment and software, provides engineering services for utilities, service companies, and OEMs worldwide. More than 100 years old, the company is a leader in the power and energy sector. Koehler has 20 years of experience in testing insulating liquids and managing analytical laboratories. He has presented his work at technical conferences and published articles in technical publications related to the power industry.

An active volunteer, he has served in every geographical unit within IEEE. His first leadership position was treasurer of the Central Indiana Section in 2010. He served as 2022 vice president of IEEE Member and Geographic Activities, 2019–2020 director of IEEE Region 4, and 2024 chair of the IEEE Board of Directors Ad Hoc Committee on Leadership Continuity and Efficiency. He served on the IEEE Board of Directors for three different years. He has been a member of IEEE-USA, Member and Geographic Activities, and Publication Services and Products boards.

He received his bachelor’s degree in chemistry and his master of business administration from Indiana University in Bloomington.

IEEE Life Fellow Manfred “Fred” J. Schindler
Seeking petition candidacy

Schindler, an expert in microwave semiconductor circuits and technology, is an independent consultant supporting clients with technical expertise, due diligence, and project management.
Throughout his career, he led the development of gallium arsenide monolithic microwave integrated-circuit technology, from lab demonstrations to the production of high-volume commercial products. He has numerous technical publications and 11 patents. He previously served as CTO of Anlotek and director of Qorvo and RFMD’s Boston design center. He was applications manager of IBM’s microelectronics wireless products group, engineering manager at ATN Microwave, and manager of Raytheon’s microwave circuits research laboratory. An IEEE volunteer for more than 30 years, he served as the 2024 vice president of IEEE Technical Activities and the 2022–2023 Division IV director. He was chair of the IEEE Conferences Committee from 2015 to 2018 and president of the IEEE Microwave Theory and Technology Society in 2003. He founded the IEEE MTTS Radio Wireless Symposium in 2006 and was general chair of the 2009 International Microwave Symposium. The IEEE–Eta Kappa Nu member received the 2018 IEEE MTTS Distinguished Service Award. He has been writing an award-winning column focused on business for IEEE Microwave Magazine since 2011. To sign Schindler’s petition, click here.
- Tragedy Spurred the First Effective Land-Mine Detector by Joanna Goodrich on 8 January 2025 at 13:00
Land mines have been around in one form or another for more than a thousand years. By now, you’d think a simple and safe way of locating and removing the devices would’ve been engineered. But that’s not the case. In fact, up until World War II, the most common method for finding the explosives was to prod the ground with a pointed stick or bayonet. The hockey-puck-size devices were buried about 15 centimeters below the ground. When someone stepped on the ground above or near the mine, their weight triggered a pressure sensor and caused the device to explode. So mine clearing was nearly as dangerous as just walking through a minefield unawares. During World War II, land mines were widely used by both Axis and Allied forces and were responsible for the deaths of 375,000 soldiers, according to the Warfare History Network. In 1941 Józef Stanislaw Kosacki, a Polish signals officer who had escaped to the United Kingdom, developed the first portable device to effectively detect a land mine without inadvertently triggering it. It proved to be twice as fast as previous mine-detection methods, and was soon in wide use by the British and their allies. The Engineer Behind the Portable Mine Detector Before inventing his mine detector, Kosacki worked as an engineer and had developed tools to detect explosives for the Polish Armed Forces. After receiving a bachelor’s degree in electrical engineering from the Warsaw University of Technology in 1933, Kosacki completed his year-long mandatory service with the army. He then joined the National Telecommunications Institute in Warsaw as a manager. Then, as now, the agency led the country’s R&D in telecommunications and information technologies. In 1937 Kosacki was commissioned by the Polish Ministry of National Defence to develop a machine that could detect unexploded grenades and shells. He completed his machine, but it was never used in the field. Polish engineer Józef Kosacki’s portable land-mine detector saved thousands of soldiers’ lives in World War II. When Germany invaded Poland in September 1939, Kosacki returned to active duty. Because of his background in electrical engineering, he was placed in a special communications unit that was responsible for the upkeep of the Warszawa II radio station. But that duty lasted only until the radio towers were destroyed by the German Army a month after the invasion. With Warsaw under German occupation, Kosacki and his unit were captured and taken to an internment camp in Hungary. In December 1939, he escaped and eventually found his way to the United Kingdom. There he joined other Polish soldiers in the 1st Polish Army Corps, stationed in St. Andrews, Scotland. He trained soldiers in the use of wireless telegraphy to send messages in Morse code. Then tragedy struck. Tragedy Inspired Engineering Ingenuity The invention of the portable mine detector came about after a terrible accident on the beaches of Dundee, Scotland. In 1940, the British Army, fearing a German invasion, buried thousands of land mines along the coast. But they didn’t notify their allies. Soldiers from the Polish 10th Armored Cavalry Brigade on a routine patrol of the beach were killed or injured when the land mines exploded. This event prompted the British Army to launch a contest to develop an effective land-mine detector. Each entrant had to pass a simple test: Detect a handful of coins scattered on the beach. Kosacki and his assistant spent three months refining Kosacki’s earlier grenade detector.
During the competition, their new detector located all of the coins, beating the other six devices entered. There’s some murkiness about the detector’s exact circuitry, as befits a technology developed under wartime security, but our best understanding is this: The tool consisted of a bamboo pole with an oval-shaped wooden panel at one end that held two coils—one transmitting and one receiving, according to a 2015 article in Aerospace Research in Bulgaria. The soldier held the detector by the pole and passed the wooden panel over the ground. A wooden backpack encased a battery unit, an acoustic-frequency oscillator, and an amplifier. The transmitting coil was connected to the oscillator, which generated current at an acoustic frequency, writes Mike Croll in his book The History of Landmines. The receiving coil was connected to the amplifier, which was then linked to a pair of headphones. The detector weighed less than 14 kilograms and operated much like the metal detectors used by beachcombers today. When the panel came close to a metallic object, the induction balance between the two coils was disturbed. Via the amplifier, the receiving coil sent an audio signal to the headphones, notifying the soldier of a potential land mine. The equipment weighed just under 14 kilograms and could be operated by one soldier, according to Croll. Kosacki didn’t patent his technology and instead gave the British Army access to the device’s schematics. The only recognition he received at the time was a letter from King George VI thanking him for his service. Detectors were quickly manufactured and shipped to North Africa, where German commander Erwin Rommel had ordered his troops to build a defensive network of land mines and barbed wire that he called the Devil’s Gardens. The minefields stretched from the Mediterranean in northern Egypt to the Qattara Depression in western Egypt and contained an estimated 16 million mines over 2,900 square kilometers. Kosacki’s detectors were first used in the Second Battle of El Alamein, in Egypt, in October and November of 1942. British soldiers used the device to scour the minefield for explosives. Scorpion tanks followed the soldiers; heavy chains mounted on the front flailed the ground and exploded the mines as the tank moved forward. Kosacki’s mine detector doubled the speed at which such heavily mined areas could be cleared, from 100 to 200 square meters an hour. By the end of the war, his invention had saved thousands of lives. Kosacki’s land-mine detector was first used in Egypt, to help clear a massive minefield laid by the Germans. The basic technology continued to be used until 1991. The basic design with minor modifications continued to be used by Canada, the United Kingdom, and the United States until the end of the First Gulf War in 1991. By then, engineers had developed more sensitive portable detectors, as well as remote-controlled mine-clearing systems. Kosacki wasn’t publicly recognized for his work until after World War II, to prevent retribution against his family in German-occupied Poland. When Kosacki returned to Poland after the war, he began teaching electrical engineering at the National Centre for Nuclear Research, in Otwock-Świerk. He was also a professor at what is now the Military University of Technology in Warsaw. He died in 1990.
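The induction-balance arrangement Croll describes can be illustrated with a few lines of code. The sketch below is a toy model, not a reconstruction of Kosacki’s circuit: the drive frequency, coupling values, and gain are invented for illustration. It treats the receive coil as nominally nulled against the transmit coil, adds a small extra coupling term when metal sits under the search head, and reports the tone level a soldier would have heard in the headphones.

```python
import numpy as np

# Toy model of an induction-balance detector (illustrative values only).
# The transmit coil is driven at an audio frequency; the receive coil is
# wound so that, over undisturbed ground, the net induced voltage is ~zero.
# A metallic object adds extra mutual coupling, unbalancing the pair and
# producing an audible tone after amplification.

F_DRIVE_HZ = 1_000          # audio-frequency oscillator (assumed value)
DRIVE_AMPLITUDE_V = 1.0     # transmit-coil drive (assumed value)
RESIDUAL_COUPLING = 1e-4    # imperfect null between the two coils (assumed)
AMPLIFIER_GAIN = 2_000      # headphone amplifier gain (assumed)

def headphone_tone(metal_coupling: float, duration_s: float = 0.01,
                   sample_rate: int = 48_000) -> float:
    """Peak headphone voltage for a given extra coupling caused by buried
    metal (0.0 means nothing metallic under the search head)."""
    t = np.arange(0, duration_s, 1 / sample_rate)
    drive = DRIVE_AMPLITUDE_V * np.sin(2 * np.pi * F_DRIVE_HZ * t)
    # Received signal = residual imbalance + disturbance from the target.
    received = (RESIDUAL_COUPLING + metal_coupling) * drive
    return float(np.max(np.abs(AMPLIFIER_GAIN * received)))

if __name__ == "__main__":
    for label, coupling in [("clear ground", 0.0),
                            ("buried coin", 5e-4),
                            ("mine casing", 5e-3)]:
        print(f"{label:12s} -> ~{headphone_tone(coupling):.2f} V peak in headphones")
```

The real detector’s sensitivity depended on coil geometry and soil conditions, but the structure of the model—oscillator, nulled coil pair, amplifier, headphones—maps directly onto the components packed into the wooden backpack.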
The prototype of Kosacki’s detector shown at top is housed at the museum of the Military Institute of Engineering Technology, in Wroclaw, Poland. Land Mines Are Still a Worldwide Problem Land-mine detection has still not been perfected, and the explosive devices are still a huge problem worldwide. On average, one person is killed or injured by land mines and other explosive ordnance every hour, according to UNICEF. Today, it’s estimated that 60 countries are still contaminated by mines and unexploded ordnance. Although portable mine detectors continue to be used, drones have become another detection method. For example, they’ve been used in Ukraine by several humanitarian nonprofits, including the Norwegian People’s Aid and the HALO Trust. Nonprofit APOPO is taking a different approach: training rats to sniff out explosives. The APOPO HeroRATs, as they are called, only detect the scent of explosives and ignore scrap metal, according to the organization. A single HeroRAT can search an area the size of a tennis court in 30 minutes, instead of the four days it would take a human to do so. Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the January 2025 print issue as “The First Land-Mine Detector That Actually Worked.” References There are many well-known Polish scientists, such as Marie Curie, whose discoveries and research made history. As a first-generation Polish-American, I wanted to learn more about the engineering feats of unknown innovators in Poland. After scouring museum websites, I came across Józef Kosacki and his portable mine detector. To learn more about his life and work, I read excerpts of Przemyslaw Slowinski and Teresa Kowalik’s 2020 book, Królewski dar: Co Polska i Polacy dali światu (“Royal Gifts: What Poland and Poles Gave the World”). I also read several articles about Kosacki on both Polish and Scottish websites, including the Polish Press Agency, Wielka Historia (Great History), and Curious St. Andrews, which covers the history of the city. To learn more about the history of land mines, I read articles published by the nonprofit APOPO, the BBC, and the U.S. Army. APOPO researches, develops, and implements land-mine detection technology to clear minefields in Africa.
- Ambitious Projects Could Reshape Geopolitics by Harry Goldstein on 6 January 2025 at 16:30
Over the last year, Spectrum’s editors have noticed an emerging through line connecting several major stories: the centrality of technology to geopolitics. Last month, our cover story, done in partnership with Foreign Policy magazine, was on the future of submarine warfare. And last October, we focused on how sea drones could bolster Taiwan’s “silicon shield” strategy, which rests on Taiwan Semiconductor Manufacturing Co.’s domination of high-end chip manufacturing. So when I asked the curator of this issue, Senior Editor Samuel K. Moore, what he saw as the major theme as we head into 2025, I wasn’t surprised when he said, without hesitation, “geopolitics and technology.” In fact, the same day Sam and I spoke, I forwarded to Spectrum’s Glenn Zorpette a news item about China banning the export to the United States of gallium, germanium, and antimony. China’s overwhelming command of critical minerals like these is at the heart of Zorpette’s story in this issue. “Inside an American Rare Earth Boomtown” paints a vivid picture of how the United States is trying to nurture a domestic rare earth mining and processing industry. China, meanwhile, is itself looking to minimize its own dependence on imported uranium by building a thorium-based molten-salt reactor in the Gobi Desert. And tensions between China and Taiwan will undoubtedly be further stressed with the opening of TSMC’s first advanced wafer fab in the United States this year. The mitigation of climate change is another key area where politics informs tech advances. In “Startups Begin Geoengineering the Sea,” Senior Associate Editor Emily Waltz takes readers aboard a pair of barges anchored near the Port of Los Angeles. There, two companies, Captura and Equatic, are piloting marine carbon-capture systems to strip CO2 out of ocean water. Whether the results can be measured accurately enough to help companies and countries meet their carbon-reduction goals is an open question. One way for the international community to study the impacts of these efforts could be Deep’s Sentinel program, the first part of which will be completed this year. Our correspondent Liam Critchley, based in England, reports in “Making Humans Aquatic Again” that Deep, located in Bristol, is building a modular habitat that will let scientists live underwater for weeks at a time. Another geopolitical concern also lies at sea: the vulnerability of undersea fiber-optic cables, which carry an ever-growing share of the world’s Internet traffic. The possibility of outages due to attack or accident is so worrying that NATO is funding a project to quickly detect undersea-cable damage and reroute data to satellites. In a provocative commentary on why technology will define the future of geopolitics, published in Foreign Affairs in 2023, Eric Schmidt, chair of the Special Competitive Studies Project and the former CEO and chair of Google, argues that “a country’s ability to project power in the international sphere—militarily, economically, and culturally—depends on its ability to innovate faster and better than its competitors.” In this issue, you’ll get an idea of how various nations are faring in this regard. In the coming year, you can look forward to our continuing analysis of how the new U.S. administration’s policies on basic research, climate change, regulation, and immigration impact global competition for the raw materials and human resources that stoke the engines of innovation.
- As EV Sales Stall, Plug-In Hybrids Get a Reboot by Lawrence Ulrich on 6 January 2025 at 13:00
Automakers got one thing right: Electrified cars are the future. What they got wrong was assuming that all of those vehicles would run on battery power alone, with gasoline-electric hybrid technology bound for the technological scrap heap. Now the automaking giants are scrambling to course correct. They’re delaying their EV plans, rejiggering factories, and acknowledging what some clear-eyed observers (including IEEE Spectrum) suspected all along: Not every car buyer is ready or able to ditch the internal-combustion engine entirely, stymied by high EV prices or unnerved by a patchy, often-unreliable charging infrastructure. This article is part of our special report Top Tech 2025. Consumers are still looking for electrified rides, just not the ones that many industry pundits predicted. In China, Europe, and the United States, buyers are converging on hybrids, whose sales growth is outpacing that of pure EVs. “It’s almost been a religion that it’s EVs or bust, so let’s not fool around with hybrids or hydrogen,” says Michael Dunne, CEO of Dunne Insights, a leading analyst of China’s auto industry. “But even in the world’s largest market, almost half of electrified vehicle sales are hybrids.” In China, which accounts for about two-thirds of global electrified sales, buyers are flocking to plug-in hybrids, or PHEVs, which combine a gas engine with a rechargeable battery pack. Together, hybrids and all-electric vehicles (a category that the Chinese government calls “new energy vehicles”) reached a milestone last July, outselling internal-combustion engine cars in that country for the first time. PHEV sales are up 85 percent year-over-year, dwarfing the 12-percent gain for pure EVs. The picture is different in the United States, where customers still favor conventional hybrids that combine a gas engine with a small battery—no plug involved, no driver action required. Through September 2024, this year’s conventional hybrid sales in the United States have soared past 1.1 million, accounting for 10.6 percent of the overall car market, versus an 8.9 percent share for pure EVs, according to Wards Intelligence. Including PHEVs, that means a record 21 percent of the country’s new cars are now electrified. But even as overall EV sales rise, plug-in hybrid sales have stalled at a mere 2 percent of the U.S. market. A J.D. Power survey also showed low levels of satisfaction among PHEV owners. Those numbers are a disappointment to automakers and regulators, who have looked to PHEVs as a bridge technology between gasoline vehicles and pure EVs. But what if the problem isn’t plug-in tech per se but the type of PHEVs being offered? Could another market pivot be just around the corner? Plug-in Hybrids: The Next Generation The Ram brand, a part of Stellantis, is betting on new plug-in technology with its Ram 1500 Ramcharger, a brash full-size pickup scheduled to go on sale later this year. The Ramcharger is what’s known as an extended-range electric vehicle, or EREV. An EREV resembles a PHEV, pairing gas and electric powertrains, but with two key differences: An EREV integrates much larger batteries, enough to rack up significant all-electric miles before the internal-combustion engine kicks in; and the gas engine in an EREV is used entirely to generate electricity, not to propel the car directly. BMW demonstrated EREV tech beginning in 2013 with its pint-size i3 REx, which used a tiny motorcycle engine to generate electricity. 
The Chevrolet Volt, produced from 2010 to 2019, also used its gasoline engine largely as a generator, although it could partially propel the car in certain conditions. The short-lived 2012 Fisker Karma took a similar approach. None of these vehicles made much impact on the market. As a large pickup with conventional styling, the Ramcharger appears to be a more mainstream proposition. The platform for the 2025 Ram 1500 Ramcharger includes a V6 engine, a 92-kilowatt-hour battery pack, and a 130-kilowatt generator. The Ramcharger shares its STLA Frame platform and electrical architecture with the upcoming pure-battery Ram 1500 REV, but it’s designed to provide more driving range, along with the option of gasoline fill-ups when there are no EV chargers around. Although the Ramcharger’s 92-kilowatt-hour battery has less than half the capacity of the Ram REV’s humongous upgraded 229-kWh battery (the largest ever in a passenger EV), it’s hardly small. The Ramcharger still packs a bigger battery than many all-electric vehicles, such as the Ford Mustang Mach-E and the Tesla Model 3. Originally, Ram planned to launch its EV pickup first. But based on consumer response, the brand decided to prioritize the Ramcharger; the Ram 1500 REV will now follow in 2026. Ram estimates that its pickup will travel about 233 kilometers (145 miles) on a fully charged battery, more than three times the stated 71-km electric range of a compact Toyota Prius Prime, among the U.S. market’s longest-range PHEVs. For typical commuters and around-town drivers, “that’s enough range where you could really use this truck as an EV 95 percent of the time,” says Ed Kim, chief analyst of AutoPacific. When the battery runs down below a preset level, the gasoline engine kicks in and generates 130 kilowatts of electricity for the battery, which continues to feed the permanent-magnet motors that propel the car. Since the motors do all the work, an EREV mimics an EV, needing no conventional transmission or driveshaft. But the onboard generator enables the Ramcharger to drive much farther than any conventional EV can.
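A quick back-of-the-envelope check shows why the series layout works for a truck this size. The sketch below uses only figures quoted in this article (the 92-kWh pack, the roughly 233-km electric range, the 130-kW generator); the 90 percent usable-pack fraction and the steady-speed framing are assumed round simplifications, so treat the output as a rough estimate rather than Ram’s engineering data.

```python
# Rough EREV range arithmetic for the Ram 1500 Ramcharger, using figures
# quoted above plus one assumed round number (usable pack fraction).

PACK_KWH = 92.0            # quoted battery capacity
ELECTRIC_RANGE_KM = 233.0  # Ram's estimated all-electric range
GENERATOR_KW = 130.0       # quoted output of the onboard gas generator
USABLE_FRACTION = 0.90     # assumption: ~90% of the pack is usable

# Implied consumption if most of the pack is usable over the quoted range.
consumption_kwh_per_km = (PACK_KWH * USABLE_FRACTION) / ELECTRIC_RANGE_KM
print(f"Implied consumption: {consumption_kwh_per_km:.2f} kWh/km "
      f"({consumption_kwh_per_km * 100:.0f} kWh/100 km)")

# At a steady cruise, how much electrical power does the truck draw,
# and how much headroom does the generator have left?
for speed_kmh in (100, 120):
    draw_kw = consumption_kwh_per_km * speed_kmh
    margin = GENERATOR_KW - draw_kw
    print(f"At {speed_kmh} km/h: ~{draw_kw:.0f} kW draw, "
          f"generator margin ~{margin:.0f} kW")
```

The implied cruising draw is well under the generator’s 130-kW output, and heavy towing can push consumption far higher, which is one reason the generator is sized so far above the steady-state load.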
Some truck loyalists have soured on all-electric pickups like the Ford F-150 Lightning, whose maximum 515-km range can drop by 50 percent or more when it’s towing or hauling heavy loads. The Ramcharger is rated to tow up to 14,000 pounds (6,350 kilograms) and to handle up to 2,625 pounds (1,190 kg) in its cargo bed. Company officials insist that its range won’t be unduly dinged by heavy lifting, although they haven’t yet revealed details. China Embraces EREVs If China is the bellwether for all things electric, the EREV could be the next big thing. EREVs now make up nearly a third of the nation’s plug-in-hybrid sales, according to Bloomberg New Energy Finance. Beijing-based Li Auto, founded nine years ago by billionaire entrepreneur Xiang Li, makes EREVs exclusively; its sales jumped nearly 50 percent in 2024. Big players, including Geely Auto and SAIC Motor Corp., are also getting into the game. Li Auto’s Li L9, shown here at a 2024 auto show in Tianjin, China, is an extended-range hybrid SUV designed for China’s wide-open spaces. The Li L9 is a formidable example of China’s EREV tech. The crossover SUV pairs a 44.5-kWh nickel cobalt manganese battery with a 1.5-liter turbo engine, for a muscular 330 kW (443 horsepower). Despite having a much smaller battery than the Ramcharger, the all-wheel-drive L9 can cover 180 km on a single charge. Then its four-cylinder engine, operating at a claimed 40.5 percent thermal efficiency, extends the total range to 1,100 km, perfect for China’s wide-open spaces. Dunne notes that in terms of geography and market preferences, China is more similar to the United States than it is to Europe. All-electric EVs predominate in China’s wealthier cities and coastal areas, but in outlying regions—characterized by long-distance drives and scarce public charging—many Chinese buyers are spurning EVs in favor of hybrids, much like their U.S. counterparts. Those parallels may bode well for EREV tech in the United States and elsewhere. When the Ramcharger was first announced in November 2023, it looked like an outlier. Since then, Nissan, Hyundai, Mazda, and General Motors’ Buick brand have all announced EREV plans of their own. Volkswagen’s Scout Traveler SUV, due out in 2027, is another entry in the extended-range electric vehicle (EREV) category. In October, Volkswagen unveiled a revival of the Scout, the charming off-roaders built by International Harvester between 1960 and 1980. The announcement came with a surprise twist: Both the Scout Traveler SUV and Scout Terra pickup will offer EREV models. (In a nod to the brand’s history, Scout’s EREV system is called “Harvester.”) The hybrid versions will have an estimated 500-mile (800-km) range, easily beating the 350-mile (560-km) range of their all-electric siblings. According to a crowdsourced tracker, about four-fifths of people reserving a Scout are opting for an EREV model. Toyota Gets the Last Laugh For Toyota, the market swing toward hybrid vehicles comes as a major vindication. The world’s largest automaker had faced withering attacks for being slow to add all-electric vehicles to its lineup, focusing more on hybrids as a transitional technology. In 2021, Toyota’s decision to redesign its Sienna minivan exclusively as a hybrid seemed risky. Its decision to do the same with the 2025 Camry, America’s best-selling sedan, seemed riskier still. Now Toyota and its luxury brand, Lexus, control nearly 60 percent of the hybrid market in North America.
Toyota was criticized for sticking with hybrid cars like the Prius, but it now controls nearly 60 percent of the North American hybrid market. Although Toyota hasn’t yet announced any plans for an EREV, the latest Toyota Prius shows the fruits of five generations and nearly 30 years of hybrid R&D. A 2024 Prius starts at around $29,000, with up to 196 horsepower (146 kW) from its 2.0-liter Atkinson-cycle engine and tiny lithium-ion battery—enough to go 0-97 km/h in a little over 7 seconds. The larger 2025 Camry hybrid is rated at up to 48 miles per gallon (20 kilometers per liter). Market analysts expect the Toyota RAV4, the most popular SUV in the United States, to go hybrid-only around 2026. David Christ, Toyota’s North American general manager, indicated that “the company is not opposed” to eventually converting all of its internal-combustion models to hybrids. Meanwhile, GM, Ford, Honda, and other brands are rapidly introducing more hybrids as well. Stellantis is offering plug-in models from Jeep, Chrysler, and Dodge in addition to the EREV Ramcharger. Even the world’s most rarefied sports-car brands are adopting hybrid technology because of its potential to improve performance significantly—and to do so without reducing fuel economy. [For more on high-performance hybrids, see “A Hybrid Car That’s Also a Supercar”.] The electricity-or-nothing crowd may regard hybrids as a compromise technology that continues to prop up demand for fossil fuels. But by meeting EV-skeptical customers halfway, models that run on both batteries and gasoline could ultimately convert more people to electrification, hasten the extinction of internal-combustion dinosaurs, and make a meaningful dent in carbon emissions.
- A Hybrid Car That’s Also a Supercar by Lawrence Ulrich on 6 January 2025 at 13:00
Aside from having four wheels, it’s hard to see what a US $30,000 Toyota Camry has in common with a $3 million Ferrari F80. But these market bookends are examples of an under-the-radar tech revolution. From budget transportation to hypercars, every category of internal-combustion car is now harnessing hybrid tech. In “As EV Sales Stall, Plug-In Hybrids Get a Reboot,” I describe the vanguard of this new hybrid boom: extended-range EVs like the 2025 Ram 1500 Ramcharger, which boasts a range of more than 1,000 kilometers. The world’s leading performance brands are also embracing hybrid EV tech—not merely to cut emissions or boost efficiency but because of the instant-on, highly controllable torque produced by electric motors. Hybridized models from BMW, Corvette, Ferrari, and Porsche are aimed at driving enthusiasts who have been notoriously resistant to electric cars. Sam Fiorani, vice president of global vehicle forecasting for AutoForecast Solutions, predicts that “nearly all light-duty internal-combustion engines are likely to be hybridized in one form or another over the next decade.” Even mainstream electrified models, Fiorani notes, routinely generate acceleration times that were once limited to exotic machines. “The performance offered by electric motors cannot be accomplished by gas-powered engines alone without impacting emissions,” Fiorani says. “The high-end brands will need to make the leap that only an electric powertrain can practically provide.” That leap is well underway, as I experienced firsthand during test drives of the BMW M5, Corvette E-Ray, and Ferrari 296 GTB. These performance hybrids outperform their internal-combustion-only equivalents in almost every way. Most incorporate all-wheel drive, along with torque vectoring, energy harvesting, and other engineering tricks that are possible with the inclusion of electric motors. 2025 BMW M5: The Heavyweight Hybrid The BMW M5 sedan is a literal heavyweight, tipping the scales at 2,435 kilograms. The 2025 BMW M5 sedan adds plug-in hybrid power to one of the company’s iconic models. A twin-turbo, 4.4-liter V-8 engine pairs with a fifth-generation BMW eMotor and a 14.8-kilowatt-hour battery. The M5 can cruise silently on battery power for 69 km (43 miles). The biggest downside is the car’s crushing curb weight—up to 2,500 kilograms (5,500 pounds)—and poor fuel economy once its electric range is spent. The upside is 527 kilowatts (717 horsepower) of Teutonic aggression, which I experienced from Munich to the Black Forest, making Autobahn sprints at up to 280 km/h (174 mph). Ferrari 296 GTB and F80: Top of the Hybrid Food Chain Although the Ferrari 296 GTB is a plug-in hybrid, its goal is high performance, not high gas mileage. Ferrari’s swoopy 296 GTB is a plug-in hybrid with a 122-kW electric motor sandwiched between a 3.0-liter V-6 and an F1 automated gearbox, producing a total of 602 kilowatts (819 horsepower). The 296 GTB can cover just 25 km on electricity alone, but that could be enough to pass through European low-emission zones, where internal-combustion cars may eventually be banned. Of course, the 296 GTB’s main goal is high performance, not high gas mileage. A digital brake-by-wire system makes it Ferrari’s shortest-stopping production car, and the brakes regenerate enough energy that I was able to recharge the 7.5-kWh battery on the fly in roughly 10 minutes of driving. Despite its modest V-6 engine, the 296 GTB turns faster laps around Ferrari’s Fiorano test circuit than any V-8 model in company history.
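Two of the numbers above are worth running through quickly. The sketch below derives the M5’s implied electric consumption and the average power the 296 GTB’s regenerative braking would need to refill its pack in the quoted time; the battery capacities and ranges come straight from this article, the ten-minute recharge is treated as a round figure, and charging losses are ignored, so these are ballpark values only.

```python
# Ballpark arithmetic from the figures quoted above (losses ignored).

# BMW M5: 14.8 kWh pack, 69 km of electric cruising.
m5_pack_kwh = 14.8
m5_ev_range_km = 69.0
m5_consumption = m5_pack_kwh / m5_ev_range_km
print(f"M5 implied consumption: ~{m5_consumption * 100:.0f} kWh/100 km")

# Ferrari 296 GTB: 7.5 kWh pack refilled in roughly 10 minutes of
# regen-heavy driving, as described above.
ferrari_pack_kwh = 7.5
recharge_minutes = 10.0
avg_regen_kw = ferrari_pack_kwh / (recharge_minutes / 60.0)
print(f"296 GTB average regen power: ~{avg_regen_kw:.0f} kW "
      f"(~{avg_regen_kw * 1.341:.0f} hp recovered while braking)")
```

Roughly 45 kW of average recuperation is about what an older 50-kW DC fast charger delivers, which gives a sense of how hard the brake-by-wire system works on a track lap.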
The Ferrari weighs in at 1,467 kilograms (3,234 pounds), unusually svelte for a hybrid, which aids its sharp handling. At the top of the hybrid food chain is Ferrari’s F80, a hypercar inspired by Formula 1 racers. It pairs a V-6 with five electric motors—two in turbochargers, three for propulsion—for a total of 882 kW (1,200 horsepower). The two electric motors driving the front wheels allow for independent torque vectoring. Only 799 of the F80s will be built, but those numbers do not capture the cultural impact of harnessing hybrid tech in one of the world’s most exclusive sports cars. Porsche 911 GTS T-Hybrid: A First for Porsche The Porsche 911 now has its first electrified design. The new Porsche 911 GTS T-Hybrid keeps the model’s classic flat-six, rear-engine layout but adds a 40-kW electric motor, for a combined 391 kW (532 horsepower). Another 20-kW motor drives a single electric turbocharger, which has much less lag and wasted heat than mechanical turbochargers do. The 911 GTS T-Hybrid is the first electrified version of Porsche’s iconic sports car. The 911 GTS T-Hybrid’s 400-volt system quickly spools that turbo up to 120,000 rpm; peak turbo boost arrives in less than one second, versus the three-plus seconds it took before. Corvette E-Ray: An Affordable Hybrid Supercar The Porsche 911’s main rival, the Corvette, is likewise coming out with a hybrid EV. The Corvette E-Ray, which starts at $108,595, is intended to make supercar tech affordable to a broader clientele. The eighth-generation Corvette was designed with an aluminum tunnel along its spine to accommodate optional hybrid power. Buy the E-Ray version, and that tunnel is stuffed with 80 pouch-style, nickel cobalt manganese Ultium battery cells that augment a V-8 engine. The small, 1.9-kWh battery pack is designed for rapid charge and discharge: It can spit out 525 amps in short bursts, sending up to 119 kW (160 horsepower) to an electrified front axle. Hybrids like the Corvette E-Ray should appeal to purists who’ve thus far resisted all-electric cars. History’s first all-wheel-drive Corvette is also the fastest in a straight line, with a computer-controlled 2.5-second launch to 97 kilometers per hour (60 miles per hour). No matter how hard I drove the E-Ray in the Berkshires of Massachusetts, I couldn’t knock its battery below about 60 percent full. Press the Charge+ button, and the Corvette uses energy recapture to fill its battery within 5 to 6 kilometers of driving. Battery and engine together produce a hefty 482 kW (655 horsepower), yet I got 25 miles per gallon during gentle highway driving, on par with lesser-powered Corvettes. Even more than other customers, sports-car buyers seem resistant to going full-EV. Aside from a handful of seven-figure hypercars, there are currently no electric two-seaters for sale anywhere in the world. Tadge Juechter, Corvette’s recently retired executive chief engineer, notes that many enthusiasts are wedded to the sound and sensation of gasoline engines, and are leery of the added weight and plummeting range of EVs driven at high velocity. That resistance doesn’t seem to extend to hybrids, however. The Corvette E-Ray, Juechter says, was specifically designed to meet those purists halfway, and “prove they have nothing to fear from electrification.”
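The E-Ray’s small pack is doing something unusual, and the article’s own numbers show it. The sketch below computes the pack voltage implied by the quoted 525-ampere, 119-kilowatt burst and the discharge rate relative to the 1.9-kWh capacity; it ignores converter losses and voltage sag, so the results are indicative, not Chevrolet’s specifications.

```python
# Indicative numbers for the Corvette E-Ray's hybrid pack, derived only
# from the figures quoted above (losses and voltage sag ignored).

burst_current_a = 525.0   # quoted short-burst current
front_axle_kw = 119.0     # quoted power delivered to the front axle
pack_kwh = 1.9            # quoted pack capacity

implied_pack_voltage = front_axle_kw * 1_000 / burst_current_a
discharge_c_rate = front_axle_kw / pack_kwh          # in units of 1/hour
seconds_to_empty = pack_kwh / front_axle_kw * 3_600  # at full burst power

print(f"Implied pack voltage: ~{implied_pack_voltage:.0f} V")
print(f"Discharge rate: ~{discharge_c_rate:.0f}C "
      f"(a full-power burst would drain the pack in ~{seconds_to_empty:.0f} s)")
```

A discharge rate in the neighborhood of 60C is roughly an order of magnitude beyond what a typical long-range EV pack ever sees, which is why the E-Ray leans on fast energy recapture, the Charge+ behavior described above, rather than plug-in charging.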
- SwitchBot Introduces Modular Mobile Home Robot by Evan Ackerman on 5 January 2025 at 13:00
Earlier this year, we reviewed the SwitchBot S10, a vacuuming and wet mopping robot that uses a water-integrated docking system to autonomously manage both clean and dirty water for you. It’s a pretty clever solution, and we appreciated that SwitchBot was willing to try something a little different. At CES this week, SwitchBot introduced the K20+ Pro, a little autonomous vacuum that can integrate with a bunch of different accessories by pulling them around on a backpack cart of sorts. The K20+ Pro is SwitchBot’s latest effort to explore what’s possible with mobile home robots. SwitchBot’s small vacuum can transport different payloads on top. What we’re looking at here is a “mini” robotic vacuum (it’s about 25 centimeters in diameter) that does everything a robotic vacuum does nowadays: It uses lidar to make a map of your house so that you can direct it where to go, it’s got a dock to empty itself and recharge, and so on. The mini robotic vacuum is attached to a wheeled platform that SwitchBot is calling a “FusionPlatform,” which sits on top of the robot like a hat. The vacuum docks to this platform, and then the platform will go wherever the robot goes. This entire system (robot, dock, and platform) is the “K20+ Pro multitasking household robot.” SwitchBot refers to the K20+ Pro as a “smart delivery assistant,” because you can put stuff on the FusionPlatform and the K20+ Pro will move that stuff around your house for you. This really doesn’t do it justice, though, because the platform is much more than just a passive mobile cart. It also can provide power to a bunch of different accessories, all of which benefit from autonomous mobility: The SwitchBot can carry a variety of payloads, including custom payloads. From left to right, you’re looking at an air circulation fan, a tablet stand, a vacuum and charging dock and an air purifier and security camera (and a stick vacuum for some reason), and lastly just the air purifier and security setup. You can also add and remove different bits, like if you want the fan along with the security camera, just plop the security camera down on the platform base in front of the fan and you’re good to go. This basic concept is somewhat similar to Amazon’s Proteus robot, in the sense that you can have one smart powered base that moves around a bunch of less smart and unpowered payloads by driving underneath them and then carrying them around. But SwitchBot’s payloads aren’t just passive cargo, and the base can provide them with a useful amount of power. A power port allows you to develop your own payloads for the robot. SwitchBot is actively encouraging users “to create, adapt, and personalize the robot for a wide variety of innovative applications,” which may include “3D-printed components [or] third-party devices with multiple power ports for speakers, car fridges, or even UV sterilization lamps,” according to the press release. The maximum payload is only 8 kilograms, though, so don’t get too crazy. Several SwitchBots can make bath time much more enjoyable. What we all want to know is when someone will put an arm on this thing, and SwitchBot is of course already working on this: SwitchBot’s mobile manipulator is still in the lab stage. The arm is still “in the lab stage,” SwitchBot says, which I’m guessing means that the hardware is functional but that getting it to reliably do useful stuff with the arm is still a work in progress.
But that’s okay—getting an arm to reliably do useful stuff is a work in progress for all of robotics, pretty much. And if SwitchBot can manage to produce an affordable mobile manipulation platform for consumers that even sort of works, that’ll be very impressive.
- CES 2025 Preview: Needleless Injections, E-Skis, and More by Gwendolyn Rak on 4 January 2025 at 12:00
This weekend, I’m on my way to Las Vegas to cover this year’s Consumer Electronics Show. I’ve scoured the CES schedule and lists of exhibitors in preparation for the event, where I hope to find fascinating new tech. After all, some prep is required given the size of the show: CES spans 12 venues and more than 2.5 million square feet of exhibit space—a good opportunity to test out devices that will be on display, like these shoe attachments that track muscle load for athletes (and journalists running between demos), or an exoskeleton to help out on hikes through the Mojave Desert. Of course, AI will continue to show up in every device you might imagine, and many you wouldn’t. This year, there will be AI-enabled vehicle sensors and PCs, as well as spice dispensers, litter boxes, and trash cans. With AI systems for baby care and better aging, the applications practically range from cradle to grave. I’m also looking forward to discovering technology that could change the way we interact with our devices, such as new displays in our personal vehicles and smart eyewear to compete with Ray-Ban Meta glasses. Hidden among the big names showcasing their latest tech, startups and smaller companies will be exhibiting products that could become the next big thing, and the innovative engineering behind them. Here are a few of the gadgets and gizmos I’m planning to see in person this week. Needle-Free Injections Imagine a world in which you could get a flu shot—or any injection—without getting jabbed by a needle. That’s what Dutch company FlowBeams aims to create with its device, which injects a thin jet of liquid directly into the skin. With a radius of 25 micrometers, the jet measures about one-tenth the size of a 25-gauge needle often used for vaccines. Personally, I’ve dealt with my fair share of needles from living with type 1 diabetes for nearly two decades, so this definitely caught my eye. Delivering insulin is, in fact, one of the medical applications the FlowBeams team imagines the tech could eventually be used for. But healthcare isn’t the only potential use. It could also become a new, supposedly painless way to get cosmetic fillers or a tattoo. Electric Skis to Help With Hills Skiing may initially seem like the recreational activity least in need of a motorized boost—gravity is pretty reliable on its own. But if you, like me, actually prefer cross-country skiing, it’s an intriguing idea. Now being brought to life by a Swiss startup, E-Skimo was created for ski mountaineering (a.k.a. “skimo”), a type of backcountry skiing that involves climbing up a mountain to then speed back down. The battery-powered, detachable device uses a belt of rubber tread to help skiers get to higher peaks in less time. Unfortunately, Vegas will be a bit too balmy for live demos. A Fitbit for Fido—and for Bessie Nearly any accessory you own today—watches, rings, jewelry, or glasses—can be replaced by a wearable tech alternative. But what about your dog? Now, we can extend our obsession with health metrics to our pets with the next generation of smart collars from companies like Queva, which is debuting a collar that grades your dog’s health on a 100-point scale. While activity-tracking collars have been on the market for several years, these and other devices, like smart pet flaps, are making our pets more high-tech than ever. And the same is true for livestock: The first wearable device for tracking a cow’s vitals will also be at CES this year.
While not exactly a consumer device, it’s a fascinating find nonetheless. Real-Time Translation Douglas Adams fans, rejoice: Inspired by the Babel fish from The Hitchhiker’s Guide to the Galaxy, Timekettle’s earbuds make (nearly) real-time translation possible. The company’s latest version operates with a new, proprietary operating system to offer two-way translation during phone or video calls on any platform. The US $449 open-ear buds translate between more than 40 languages and 93 accents, albeit with a 3 to 5 second delay. “Hormometer” to Subdue Stress Ironically, everybody seems stressed out about cortisol, the hormone that regulates your body’s stress response. To make hormone testing more accessible, Eli Health has created a device, dubbed the “Hormometer,” which detects either cortisol or progesterone levels from a quick saliva sample. After 20 minutes, the user scans the tester with a smartphone camera and gets results. At about $8 per test, each one is much less expensive than other at-home or lab tests. However, the company functions as a subscription service, starting at about $65 per month with a 12-month commitment. AR Binoculars to Seamlessly ID the Natural World I have a confession to make: For someone who once considered a career in astronomy, I can identify embarrassingly few constellations. Alas, after Orion and the Big Dipper, I have trouble finding many of these patterns in the night sky. Stargazing apps help, but looking back and forth between a screen and the sky tends to ruin the moment. Unistellar’s Envision smart binoculars, however, use augmented reality to map the stars, tag comets, and label nebulae directly in your line of sight. During the day, they can identify hiking trails or tell you the altitude of a summit on the horizon. When it comes to identifying the best technology on the horizon, though, leave that job to IEEE Spectrum.
- IEEE Young Professionals Talked Sustainability Tech at Climate Week NYC by Chinmay Tompe on 3 January 2025 at 19:00
The IEEE Young Professionals Climate and Sustainability Task Force focuses on empowering emerging leaders to contribute to sustainable technology and climate action, fostering engagement and leading initiatives that address climate change–related challenges and potential solutions. Since its launch in 2023, the CSTF has been engaging them in the conversation of how to get involved in the climate and sustainability sectors. The group held a panel session during last year’s Climate Week NYC, which ran from 22 to 29 September to coincide with the U.N. Summit of the Future. Climate Week NYC is the largest annual climate event, featuring more than 600 activities throughout New York City. It brings together leaders from the business sector, government, and private organizations to promote climate action and innovation, highlighting the urgent need for transformative change. The U.N. summit, held on 22 and 23 September, aimed to improve global governance and establish a “pact for the future” focusing on the climate crisis and sustainable development. The IEEE panel brought together climate-change experts from organizations and government agencies worldwide—including IEEE, the Global Renewables Alliance, and the SDG7 Youth Constituency—to highlight the intersection of technology, policy, and citizen engagement. Participants from 30 countries attended the panel session. The event underscored IEEE’s commitment to fostering technological solutions for climate challenges while emphasizing the crucial role of young professionals in driving innovation and change. As the world moves toward critical climate deadlines, the dialogue demonstrated that success is likely to require a combination of technical expertise, policy understanding, and inclusive participation from all stakeholders. The panel was moderated by IEEE Member Sajith Wijesuriya, chair of the task force, and IEEE Senior Member Sukanya S. Meher, the group’s communications lead and one of the authors of this article. The moderators guided the discussion through key topics such as organizational collaboration, youth engagement, skill development, and technological advancements. The panel also highlighted why effective climate solutions must combine technical innovation with inclusive policymaking, ensuring the transition to a sustainable future leaves no community behind. Engaging youth in mitigating climate change The panel featured young professionals who emphasized the importance of engaging the next generation of engineers, climate advocates, and students in the climate-action movement. “Young people, especially women living in [rural] coastal communities, are at the front lines of the climate crisis,” said Grace Young, the strategy and events manager at nonprofit Student Energy, based in Vancouver. Women and girls are disproportionately impacted by climate change because “they make up the majority of the world’s poor, who are highly dependent on local natural resources for their livelihood,” according to the United Nations. Women and girls are often responsible for securing food, water, and firewood for their families, the U.N. says, and during times of drought and erratic rainfall, it takes more time and work to secure income and resources. That can expose women and girls to increased risks of gender-based violence, as climate change exacerbates existing conflicts, inequalities, and vulnerabilities, according to the organization. 
Climate advocates, policymakers, and stakeholders “must ensure that they [women] have a seat at the table,” Young said. One way to do that is to implement energy education programs in preuniversity schools. “Young people must be heard and actively involved in shaping solutions,” said Manar Elkebir, founder of EcoWave, a Tunisian youth-led organization that focuses on mobilizing young people around environmental issues. During the panel session, Elkebir shared her experience collaborating with IRENA—a global intergovernmental group—and the Italian government to implement energy education programs in Tunisian schools. She also highlighted the importance of creating inclusive, nonintimidating spaces for students to engage in discussions about the transition to cleaner energy and other climate-related initiatives. Young professionals “are not just the leaders of tomorrow; we are the changemakers of today,” she said. Another group that is increasing its focus on youth engagement and empowerment is the World Meteorological Organization, headquartered in Geneva. The WMO’s Youth Climate Action initiative, for example, lets young people participate in policymaking and educational programs to deepen their understanding of climate science and advocacy. Such initiatives recognize that the next generation of leaders, scientists, and innovators will be generating transformative changes, and they need to be equipped with knowledge and tools, said panelist Ko Barret, WMO deputy secretary general. Other discussions focused on the importance of engaging young professionals in the development and implementation of climate change technology. There are an abundance of career opportunities in the field, particularly in climate data analytics, said Bala Prasanna, IEEE Region 1 director. “Both leadership skills and multidisciplinary learning are needed to stay relevant in the evolving climate and sustainability sectors,” Prasanna said. Although “climate change represents humanity’s greatest threat,” said Saifur Rahman, 2023 IEEE president, technology-driven solutions were notably underrepresented at climate conferences such as COP27. Rahman urged young engineers to take ownership of the problem, and he directed them to resources such as IEEE’s climate change website, which offers information on practical solutions. “Technology practitioners will be at the forefront of developing public-private partnerships that integrate cutting-edge technologies with national energy strategies,” said A. Anastasia Kefalidou, acting chief of the IRENA office in New York. “The IRENA NewGen Renewable Energy Accelerator plays a key role in nurturing a new generation of technology practitioners, who can lead innovation and digital transformation in the energy sector.” The accelerator program provides budding entrepreneurs ages 18 to 35 with mentors and resources to scale projects focused on energy technologies and climate adaptation. “The dialogue hosted by IEEE Young Professionals during this incredible Climate Week event is helping to bridge the gap between emerging innovators and institutional efforts,” Young added, “providing a platform for fresh perspectives on renewable energy and climate solutions.” Focus on global partnerships Fostering global partnerships was on the panelists’ minds. 
Collaboration among governments, private companies, and international organizations could accelerate clean energy transitions, particularly in emerging economies, said Ana Rovzar, director of policy and public affairs at the Global Renewables Alliance in Brussels. She highlighted the need for tailored approaches to address regional challenges in climate resilience and energy access. Environmental journalist Ciara Kavanagh shared how she has been inspired by genuine intersectoral discussions among technical experts, policymakers, communicators, and leaders. The communications specialist at the U.N. Environment Programme in New York discussed how hearing from technical experts can help communicators like her understand renewable technologies. “If the myriad marvelous ideas coming out of the lab aren’t communicated widely and effectively, we all risk falling short of real impact,” Kavanagh said. She called on fellow young professionals to work together to show the world what a cleaner, greener future powered by renewable energy could look like, and to “ensure the power to build that future is in the hands and homes of those who need it, regardless of where they live.” At COP28, COP29, and G20, the United Nations outlined ambitious global goals in what is known as the UAE Consensus. One of the goals is tripling renewable energy capacity and doubling energy efficiency by 2030. Kefalidou highlighted IRENA’s commitment to tracking the targets by analyzing global technology trends while emphasizing the development of next-generation solutions, including advanced solar PV systems, offshore wind farms, and smart-grid technologies. IRENA’s tracking shows that despite rapid growth in renewable energy, the UAE Consensus’s current plans are projected to achieve only 50 percent of the target capacity by the deadline. IRENA regularly publishes detailed progress reports including renewable capacity statistics and the World Energy Transitions Outlook. Not even 20 percent of the U.N.’s Sustainable Development Goals are on track to reach their targets, and more than 40 percent of governments and companies lack net-zero targets, said Shreenithi Lakshmi Narasimhan. In a call to action, the CSTF member and vice chair of the New York IEEE local group emphasized the need for accelerated climate action. “The tools young professionals need to succeed are already in our hands,” Narasimhan said. “Now we must invest strategically, overcome geopolitical barriers, and drive toward real solutions. The stakes couldn’t be higher.” Josh Oxby, energy advisor for the U.K.’s Parliamentary Office of Science and Technology, emphasized the importance of empowering young changemakers and forming collaborations among private, public, and third-sector organizations to develop a workforce to assist with energy transition. Third-sector organizations include charities, community groups, and cooperative societies. “Climate Week NYC has highlighted the importance of taking a step back to evaluate the conventional scrutiny of—and engagement with—policy and governance processes,” Oxby said. “Young professionals are the changemakers of today. Their way of forward thinking and reapproaching frameworks for the inclusivity of future generations is a testament to their dynamic and reflective mindset.” Tech-driven strategies to address the climate crisis CSTF member Chinmay Tompe highlighted the potential of breakthrough technologies such as quantum computing and simulation in addressing climate change and driving the energy transition. 
“Although we have yet to achieve practical quantum utility, recent advancements in the field offer promising opportunities,” Tompe said. “Simulating natural processes, like molecular and particle fluid dynamics, can be achieved using quantum systems. These technologies could pave the way for cleaner energy solutions, including optimized reactor designs, enhanced energy storage systems, and more efficient energy distribution networks. However, realizing this potential requires proactive efforts from policymakers to support innovation and implementation.” Nuclear energy emerged as a crucial component of the clean energy discussion. Dinara Ermakova advocated for the role nuclear technology can play in achieving net-zero emissions goals, particularly via small modular reactors. Ermakova is an innovation chair for the International Youth Nuclear Congress in Berkeley, Calif. IYNC is a nonprofit that connects students and young professionals worldwide involved in nuclear science and technology. Marisa Zalabak, founder and CEO of Open Channel Culture, highlighted the ethical dilemmas of technological solutions, specifically those regarding artificial intelligence. “AI is not a magic bullet,” Zalabak cautioned, “but when governed ethically and responsibly, it can become a powerful tool for driving climate solutions while safeguarding human rights and planetary health.” She emphasized the importance of regenerative design systems and transdisciplinary collaboration in creating sustainable solutions: “This event reinforced the importance of human collaboration across sectors and the power of youth-driven innovation in accelerating climate action dedicated to human and environmental flourishing for current and future generations.” Implications of climate tech and policy IEEE CSTF showed its commitment to sustainability throughout the event. Panelists were presented with customized block-printed shawls made with repurposed fabric. The initiative was led by CSTF member Kalyani Matey and sourced from Divyang Creations, a social enterprise in Latur, India, employing people with disabilities. Leftover refreshments were donated to New York City food banks. After the panel session concluded, Rahman said participating in it was fulfilling. He commended the young professionals for their “enthusiasm and commitment to help develop a road map to implement some of the SDG goals.” The outcomes of the discussions were presented at the U.N. Climate Change Conference, which was held in Baku, Azerbaijan, from 11 to 22 November.
- Video Friday: Sleepy Robot Baby by Evan Ackerman on 3 January 2025 at 17:30
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC ICRA 2025: 19–23 May 2025, ATLANTA, GA IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN RSS 2025: 21–25 June 2025, LOS ANGELES IAS 2025: 30 June–4 July 2025, GENOA, ITALY ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL IEEE World Haptics: 8–11 July 2025, SUWON, KOREA IFAC Symposium on Robotics: 15–18 July 2025, PARIS RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL Enjoy today’s videos! It’s me. But we can all relate to this child android robot struggling to stay awake. [ Osaka University ] For 2025, the RoboCup SPL plans an interesting new technical challenge: Kicking a rolling ball! The velocity and start position of the ball can vary and the goal is to kick the ball straight and far. In this video, we show our results from our first testing session. [ Team B-Human ] When you think of a prosthetic hand you probably think of something similar to Luke Skywalker’s robotic hand from Star Wars, or even Furiosa’s multi-fingered claw from Mad Max. The reality is a far cry from these fictional hands: upper limb prostheses are generally very limited in what they can do, and how we can control them to do it. In this project, we investigate non-humanoid prosthetic hand design, exploring a new ideology for the design of upper limb prostheses that encourages alternative approaches to prosthetic hands. In this wider, more open design space, can we surpass humanoid prosthetic hands? [ Imperial College London ] Thanks, Digby! A novel three-dimensional (3D) Minimally Actuated Serial Robot (MASR), actuated by a robotic motor. The robotic motor is composed of a mobility motor (to advance along the links) and an actuation motor [to] move the joints. [ Zarrouk Lab ] This year, Franka Robotics team hit the road, the skies and the digital space to share ideas, showcase our cutting-edge technology, and connect with the brightest minds in robotics across the globe. Here is 2024 video recap, capturing the events and collaborations that made this year unforgettable! [ Franka Robotics ] Aldebaran has sold an astonishing number of robots this year. [ Aldebaran ] The advancement of modern robotics starts at its foundation: the gearboxes. Ailos aims to define how these industries operate with increased precision, efficiency and versatility. By innovating gearbox technology across diverse fields, Ailos is catalyzing the transition towards the next wave of automation, productivity and agility. [ Ailos Robotics ] Many existing obstacle avoidance algorithms overlook the crucial balance between safety and agility, especially in environments of varying complexity. In our study, we introduce an obstacle avoidance pipeline based on reinforcement learning. This pipeline enables drones to adapt their flying speed according to the environmental complexity. After minimal fine-tuning, we successfully deployed our network on a real drone for enhanced obstacle avoidance. [ MAVRL via Github ] Robot-assisted feeding promises to empower people with motor impairments to feed themselves. However, research often focuses on specific system subcomponents and thus evaluates them in controlled settings. 
This leaves a gap in developing and evaluating an end-to-end system that feeds users entire meals in out-of-lab settings. We present such a system, collaboratively developed with community researchers. [ Personal Robotics Lab ] A drone’s eye-view reminder that fireworks explode in 3D. [ Team BlackSheep ]
- This Year, RISC-V Laptops Really Arrive by Matthew S. Smith on 3 January 2025 at 14:00
Buried in the inner workings of your laptop is a secret blueprint, dictating the set of instructions the computer can execute and serving as the interface between hardware and software. The instructions are immutable and hidden behind proprietary technology. But starting in 2025, you could buy a new and improved laptop whose secrets are known to all. That laptop will be fully customizable, with both hardware and software that you’ll be able to modify to fit your needs. This article is part of our special report Top Tech 2025. RISC-V is an open-source instruction set architecture (ISA) poised to make personal computing more, well, personal. Though RISC-V is still early in its life cycle, it’s now possible to buy fully functional computers with this technology inside—a key step toward providing a viable alternative to x86 and Arm in mainstream consumer electronics. “If we look at a couple of generations down the [software] stack, we’re starting to see a line of sight to consumer-ready RISC-V in something like a laptop, or even a phone,” said Nirav Patel, CEO of laptop maker Framework. Patel’s company plans to release a laptop that can support a RISC-V mainboard in 2025. Though still intended for early adopters and developers, it will be the most accessible and polished RISC-V laptop yet, and it will ship to users with the same look and feel as the Framework laptops that use x86 chips. RISC-V Is Coming to a Laptop Near You An ISA is a rulebook that defines the set of valid instructions programs can execute on a processor. Like other ISAs, RISC-V includes dozens of instructions, such as loading data into memory or floating-point arithmetic operations. But RISC-V is open source, which sets it apart from closed ISAs like x86 and Arm. It means anyone can use RISC-V without a license fee. It also makes RISC-V hardware easy to customize, because there are no license restrictions on what can or can’t be modified. Researchers at the University of California, Berkeley’s Parallel Computing Laboratory began developing the RISC-V ISA in 2010 based on established reduced instruction set computer (RISC) principles, and it’s already in use by companies looking to design inexpensive, specialized chips: Alibaba put RISC-V to work in a chip development platform for edge computing, and Western Digital used RISC-V for storage controllers. Now, a small group of companies and enthusiasts are laying the groundwork for bringing RISC-V to mainstream consumer devices. Among these pioneers is software engineer Yuning Liang, who found himself drawn to the idea while sidelined by COVID lockdowns in Shenzhen, China. Unable to continue previous work, “I had to ask, what can I do here?” says Liang. “Mark Himelstein, the former CTO of RISC-V [International], mentioned we should do a laptop on a 12-nanometer RISC-V test chip.” Because the 12-nm node is an older production process than CPUs use today, each chip costs less. DeepComputing released the first RISC-V laptop, Roma, in 2023, followed by the DC-Roma II a year later. DeepComputing The project had a slow start amid COVID-related supply-chain issues but eventually led to the 2023 release of the world’s first RISC-V laptop, the Roma, by DeepComputing—a Hong Kong–based company Liang founded the prior year. It was followed in 2024 by the DC-Roma II, which shipped with the open-source Ubuntu operating system preinstalled, making it capable of basic computing tasks straight out of the box.
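To make the idea of an ISA concrete, here is a minimal sketch in Python of the kind of contract such a rulebook pins down. The two bit layouts are the standard RV32I encodings for ADDI and ADD; everything else (the sign_extend and execute helpers and the toy register file) is purely illustrative and says nothing about how any real RISC-V chip, including DeepComputing’s, is actually built.

```python
# Toy interpreter for two RV32I instructions, ADDI and ADD, to illustrate what
# an ISA specifies: a fixed bit layout and the behavior each pattern must have.
# Illustrative only; real cores implement this contract in silicon.

def sign_extend(value, bits):
    """Treat the low `bits` bits of value as a two's-complement signed number."""
    sign = 1 << (bits - 1)
    return (value & (sign - 1)) - (value & sign)

def execute(instruction, regs):
    """Decode one 32-bit instruction word and update the register file."""
    opcode = instruction & 0x7F
    rd     = (instruction >> 7) & 0x1F
    funct3 = (instruction >> 12) & 0x07
    rs1    = (instruction >> 15) & 0x1F

    if opcode == 0x13 and funct3 == 0:          # ADDI rd, rs1, imm
        imm = sign_extend(instruction >> 20, 12)
        regs[rd] = (regs[rs1] + imm) & 0xFFFFFFFF
    elif opcode == 0x33 and funct3 == 0 and (instruction >> 25) == 0:  # ADD rd, rs1, rs2
        rs2 = (instruction >> 20) & 0x1F
        regs[rd] = (regs[rs1] + regs[rs2]) & 0xFFFFFFFF
    else:
        raise NotImplementedError(f"opcode {opcode:#x} is outside this toy subset")
    regs[0] = 0                                  # register x0 is hard-wired to zero
    return regs

regs = [0] * 32
execute(0x00500093, regs)   # addi x1, x0, 5
execute(0x00A00113, regs)   # addi x2, x0, 10
execute(0x002081B3, regs)   # add  x3, x1, x2
print(regs[3])              # prints 15
```

Because both the bit layout and the required behavior are published openly, anyone can build a compatible implementation, in software or in silicon, without paying a license fee, which is the openness the article describes.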
DeepComputing is now working in partnership with Framework, a laptop maker founded in 2019 with the mission to “fix consumer electronics,” as it’s put on the company’s website. Framework sells modular, user-repairable laptops that owners can keep indefinitely, upgrading parts (including those that can’t usually be replaced, like the mainboard and display) over time. “The Framework laptop mainboard is a place for board developers to come in and create their own,” says Patel. The company hopes its laptops can accelerate the adoption of open-source hardware by offering a platform where board makers can “deliver system-level solutions,” Patel adds, without the need to design their own laptop in-house. Closing the Price and Performance Gap The DeepComputing DC-Roma II laptop marked a major milestone for open source computing, and not just because it shipped with Ubuntu installed. It was the first RISC-V laptop to receive widespread media coverage, especially on YouTube, where video reviews of the DC-Roma II (as well as other RISC-V single-board computers, such as the Milk-V Pioneer and Lichee Pi 4A) collectively received more than a million views. Even so, Liang was quick to acknowledge a flaw found by many online reviewers: The RISC-V chip in the DC-Roma II performs well behind x86 and Arm-powered alternatives. DeepComputing wants to tackle that in 2025 with the DC-Roma III, according to Liang. In the coming year, “performance will be much better. It’ll still be on 12-nanometer [processors], but we’re going to upgrade the CPU’s performance to be more like an Arm Cortex-A76,” says Liang. The Cortex-A76 is a key architecture to benchmark RISC-V against, as it’s used by chips in high-volume single-board computers like the Raspberry Pi 5. Liang isn’t alone in his dream of high-performance RISC-V chips. Ventana, founded in 2018, is designing high-performance data-center chips that rely on the open-source ISA. Balaji Baktha, Ventana’s founder and CEO, is adamant that RISC-V chips will go toe-to-toe with x86 and Arm across a variety of products. “There’s nothing that is ISA specific that determines if you can make something high performance, or not,” he says. “It’s the implementation of the microarchitecture that matters.” DeepComputing also wants to make RISC-V appealing with lower prices. At about US $600, the DC-Roma II isn’t much more expensive than a midrange Windows laptop like an Acer Aspire or Dell Inspiron, but online reviews note its performance is more in line with that of budget laptops that sell for much less. Liang says that’s due to the laptop’s low production volume: The DC-Roma II was produced in “the low tens of thousands,” according to Liang. DeepComputing hopes to increase production to 100,000 units for the DC-Roma III, he adds. If that pans out, it should make all DeepComputing laptops more competitive with those using x86 and Arm. That’s important to Liang, who sees affordability as synonymous with openness; both lower the barriers for newcomers. “If we can open up even the chip design, then one day, even students at schools and universities can come into class and design their own chips, with open tools,” says Liang. “With openness, you can choose to build things yourself from zero.”
- Reversible Computing Escapes the Lab in 2025 by Dina Genkina on 2 January 2025 at 14:00
Michael Frank has spent his career as an academic researcher working over three decades in a very peculiar niche of computer engineering. According to Frank, that peculiar niche’s time has finally come. “I decided earlier this year that it was the right time to try to commercialize this stuff,” Frank says. In July 2024, he left his position as a senior engineering scientist at Sandia National Laboratories to join a startup, U.S.- and U.K.-based Vaire Computing. Frank argues that it’s the right time to bring his life’s work—called reversible computing—out of academia and into the real world because the computing industry is running out of energy. “We keep getting closer and closer to the end of scaling energy efficiency in conventional chips,” Frank says. According to an IEEE semiconductor industry road map report Frank helped edit, by late in this decade the fundamental energy efficiency of conventional digital logic is going to plateau, and “it’s going to require more unconventional approaches like what we’re pursuing,” he says. This article is part of our special report Top Tech 2025. As Moore’s Law stumbles and its energy-themed cousin Koomey’s Law slows, a new paradigm might be necessary to meet the increasing computing demands of today’s world. According to Frank’s research at Sandia, in Albuquerque, reversible computing may offer up to a 4,000x energy-efficiency gain compared to traditional approaches. “Moore’s Law has kind of collapsed, or it’s really slowed down,” says Erik DeBenedictis, founder of Zettaflops, who isn’t affiliated with Vaire. “Reversible computing is one of just a small number of options for reinvigorating Moore’s Law, or getting some additional improvements in energy efficiency.” Vaire’s first prototype, expected to be fabricated in the first quarter of 2025, is less ambitious—the company is producing a chip that, for the first time, recovers energy used in an arithmetic circuit. The next chip, projected to hit the market in 2027, will be an energy-saving processor specialized for AI inference. The 4,000x energy-efficiency improvement is on Vaire’s road map but probably 10 or 15 years out. “I feel that the technology has promise,” says Himanshu Thapliyal, associate professor of electrical engineering and computer science at the University of Tennessee, Knoxville, who isn’t affiliated with Vaire. “But there are some challenges also, and hopefully, Vaire Computing will be able to overcome some of the challenges.” What Is Reversible Computing? Intuitively, information may seem like an ephemeral, abstract concept. But in 1961, Rolf Landauer at IBM discovered a surprising fact: Erasing a bit of information in a computer necessarily costs energy, which is lost as heat. It occurred to Landauer that if you were to do computation without erasing any information, or “reversibly,” you could, at least theoretically, compute without using any energy at all. Landauer himself considered the idea impractical. If you were to store every input and intermediate computation result, you would quickly fill up memory with unnecessary data. But Landauer’s successor, IBM’s Charles Bennett, discovered a workaround for this issue. Instead of just storing intermediate results in memory, you could reverse the computation, or “decompute,” once that result was no longer needed. This way, only the original inputs and final result need to be stored. Take a simple example, such as the exclusive-OR, or XOR gate.
Normally, the gate is not reversible—there are two inputs and only one output, and knowing the output doesn’t give you complete information about what the inputs were. The same computation can be done reversibly by adding an extra output, a copy of one of the original inputs. Then, using the two outputs, the original inputs can be recovered in a decomputation step. A traditional exclusive-OR (XOR) gate is not reversible—you cannot recover the inputs just by knowing the output. Adding an extra output, just a copy of one of the inputs, makes it reversible. Then, the two outputs can be used to “decompute” the XOR gate and recover the inputs, and with it, the energy used in computation. The idea kept gaining academic traction, and in the 1990s, several students working under MIT’s Thomas Knight embarked on a series of proof-of-principle demonstrations of reversible computing chips. One of these students was Frank. While these demonstrations showed that reversible computation was possible, the wall-plug power usage was not necessarily reduced: Although power was recovered within the circuit itself, it was subsequently lost within the external power supply. That’s the problem that Vaire set out to solve. Computing Reversibly in CMOS Landauer’s limit gives a theoretical minimum for how much energy information erasure costs, but there is no maximum. Today’s CMOS implementations use more than a thousand times as much energy to erase a bit as is theoretically possible. That’s mostly because transistors need to maintain high signal energies for reliability, and under normal operation that all gets dissipated as heat. To avoid this problem, many alternative physical implementations of reversible circuits have been considered, including superconducting computers, molecular machines, and even living cells. However, to make reversible computing practical, Vaire’s team is sticking with conventional CMOS techniques. “Reversible computing is disrupting enough as it is,” says Vaire chief technology officer and cofounder Hannah Earley. “We don’t want to disrupt everything else at the same time.” To make CMOS play nicely with reversibility, researchers had to come up with clever ways to recover and recycle this signal energy. “It’s kind of not immediately clear how you make CMOS operate reversibly,” Earley says. The main way to reduce unnecessary heat generation in transistor use—to operate them adiabatically—is to ramp the control voltage slowly instead of jumping it up or down abruptly. This can be done without adding extra compute time, Earley argues, because currently transistor switching times are kept comparatively slow to avoid generating too much heat. So, you could keep the switching time the same and just change the waveform that does the switching, saving energy. However, adiabatic switching does require something to generate the more complex ramping waveforms. It still takes energy to flip a bit from 0 to 1, changing the gate voltage on a transistor from its low to high state. The trick is that, as long as you don’t convert energy to heat but store most of it in the transistor itself, you can recover most of that energy during the decomputation step, where any no-longer-needed computation is reversed. The way to recover that energy, Earley explains, is by embedding the whole circuit into a resonator. A resonator is kind of like a swinging pendulum.
If there were no friction from the pendulum’s hinge or the surrounding air, the pendulum would swing forever, going up to the same height with each swing. Here, the swing of the pendulum is a rise and fall in voltage powering the circuit. On each upswing, one computational step is performed. On each downswing, a decomputation is performed, recovering the energy. In every real implementation, some amount of energy is still lost with each swing, so the pendulum requires some power to keep it going. But Vaire’s approach paves the way to minimizing that friction. Embedding the circuit in a resonator simultaneously creates the more complex waveforms needed for adiabatic transistor switching and provides the mechanism for recovering the saved energy. The Long Road to Commercial Viability Although the idea of embedding reversible logic inside a resonator has been developed before, no one has yet built one that integrates the resonator on chip with the computing core. Vaire’s team is hard at work on their first version of this chip. The simplest resonator to implement, and the one the team is tackling first, is an inductive-capacitive (LC) resonator, where the role of the capacitor is played by the whole circuit and an on-chip inductor serves to keep the voltage oscillating. The chip Vaire plans to send for fabrication in early 2025 will be a reversible adder embedded in an LC resonator. The team is also working on a chip that will perform the multiply-accumulate operation, the basic computation in most machine learning applications. In the following years, Vaire plans to design the first reversible chip specialized for AI inference. “Some of our early test chips might be lower-end systems, especially power-constrained environments, but not long after that, we’re addressing higher-end markets as well,” Frank says. LC resonators are the most straightforward way to implement in CMOS, but they come with comparatively low quality factors, meaning the voltage pendulum will run with some friction. The Vaire team is also working on integrating a microelectromechanical systems (MEMS) resonator version, which is much more difficult to integrate on chip but promises much higher quality factors (less friction). Earley expects a MEMS-based resonator to eventually provide 99.97 percent friction-free operation. Along the way, the team is designing new reversible logic gate architectures and electronic-design-automation tools for reversible computation. “Most of our challenges will be, I think, in custom manufacturing and hetero-integration in order to combine efficient resonator circuits together with the logic in one integrated product,” Frank says. Earley hopes that these are challenges the company will overcome. “In principle, this allows [us], over the next 10 to 15 years, to get to 4,000x improvement in performance,” she says. “Really it is going to be down to how good a resonator you can get.”
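To connect this back to the XOR example described earlier in the article, here is a minimal sketch in Python of Bennett’s compute-then-decompute idea at the logical level. It only shows that keeping a copy of one input means no information is erased; the energy recovery itself happens in the adiabatic circuitry and resonator, which software cannot demonstrate.

```python
# Reversible XOR at the logical level: output (a, a XOR b) instead of just
# a XOR b, so a later "decompute" step can recover the original inputs.

def xor_forward(a, b):
    """Compute step: return both the copied input and the XOR result."""
    return a, a ^ b

def xor_backward(a, a_xor_b):
    """Decompute step: XOR-ing again undoes the operation and recovers b."""
    return a, a ^ a_xor_b

# Check all four input pairs: nothing is lost, so nothing needs to be erased.
for a in (0, 1):
    for b in (0, 1):
        assert xor_backward(*xor_forward(a, b)) == (a, b)
print("all four input pairs recovered exactly")
```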
- Build a Better DIY Seismometer by David Schneider on 2 January 2025 at 13:00
In September of 2023, I wrote in these pages about using a Raspberry Pi–based seismometer—a Raspberry Shake—to record earthquakes. But as time went by, I found the results disappointing. In retrospect, I realize that my creation was struggling to overcome a fundamental hurdle. I live on the tectonically stable U.S. East Coast, so the only earthquakes I could hope to detect would be ones taking place far away. Unfortunately, the signals from distant quakes have relatively low vibrational frequencies, and the compact geophone sensor in a Raspberry Shake is meant for higher frequencies. I had initially considered other sorts of DIY seismometers, and I was put off by how large and ungainly they were. But my disappointment with the Raspberry Shake drove me to construct a seismometer that represents a good compromise: It’s not so large (about 60 centimeters across), and its resonant frequency (about 0.2 hertz) is low enough to make it better at sensing distant earthquakes. My new design is for a horizontal-pendulum seismometer, which contains a pendulum that swings horizontally—or almost so, being inclined just a smidge. Think of a fence gate with its two hinges not quite aligned vertically. It has a stable position in the middle, but when it’s nudged, the restoring force is very weak, so the gate makes slow oscillations back and forth. The backbone of my seismometer is a 60-cm-long aluminum extrusion. Or maybe I should call it the keel, as this seismometer also has what I would describe as a mast, another piece of aluminum extrusion about 25 cm long, attached to the end of the keel and sticking straight up. Beneath the mast and attached to the bottom of the keel is an aluminum cross piece, which prevents the seismometer from toppling over. The pendulum—let’s call it a boom, to stick with my nautical analogies—is a 60-cm-long bar cut from 0.375-inch-square aluminum stock. At one end, I attached a 2-pound lead weight (one intended for a diving belt), using plastic cable ties. To allow the boom to swing without undue friction, I drilled a hole in the unweighted end and inserted the carbide-steel tip of a scribing tool. That sharp tip rests against a shallow dimple in a small steel plate screwed to the mast. To support the boom, I used some shifter cable from a bicycle, attached by looping it through a couple of strategically drilled holes and then locking things down using metal sleeves crimped onto the ends of the cable. Establishing the response of the seismometer to vibrations is the role of the end weight [top left] and damping magnets [top right]. A magnet is also used with a Hall-effect sensor [middle right] that is read by a microcontroller [middle left]. Data is stored on a logging board with a real-time clock [bottom]. James Provost I fabricated a few other small physical bits, including leveling feet and a U-shaped bracket to prevent the boom from swinging too far from equilibrium. But the main challenges were how to sense earthquake-induced motions of the boom and how to prevent it from oscillating indefinitely. Most DIY seismometers use a magnet and coil to sense motion as the moving magnet induces a current in the fixed coil. That’s a tricky proposition in a long-period seismometer, because the relative motion of the magnet is so slow that only very faint electrical signals are induced in the coil. One of the more sophisticated designs I saw online called for an LVDT (linear variable differential transformer), but such devices seem hard to come by.
Instead, I adopted a strategy I hadn’t seen used in any other homebrewed seismometer: employing a Hall-effect magnetometer to sense position. All I needed was a small neodymium magnet attached to the boom and an inexpensive Hall-effect sensor board positioned beneath it. It worked just great. The final challenge was damping. Without that, the pendulum, once excited, would oscillate for too long. My initial solution was to attach to the boom an aluminum vane immersed in a viscous liquid (namely, oil). That worked, but I could just see the messy oil spills coming. So I tacked in the other direction and built a magnetic damper, which works by having the aluminum vane pass through a strong magnetic field. This induces eddy currents in the vane that oppose its motion. To the eye, it looks like the metal is caught in a viscous liquid. The challenge here is making a nice strong magnetic field. For that, I collected all the neodymium magnets I had on hand, kludged together a U-shaped steel frame, and attached the magnets to the frame, mimicking a horseshoe magnet. This worked pretty well, although my seismometer is still somewhat underdamped. Compared with the fussy mechanics, the electronics were a breeze to construct. I used a US $9 data-logging board that was designed to accept an Arduino Nano and that includes both a real-time clock chip and an SD card socket. This allowed me to record the digital output of the Hall sensor at 0.1-second intervals and store the time-stamped data on a microSD card. My homebrew seismometer recorded the trace of an earthquake occurring roughly 1,500 kilometers away, beginning at approximately 17:27 and ending at 17:37. James Provost The first good test came on 10 November 2024, when a magnitude-6.8 earthquake struck just off the coast of Cuba. Consulting the global repository of shared Raspberry Shake data, I could see that units in Florida and South Carolina picked up that quake easily. But ones located farther north, including one close to where I live in North Carolina, did not. Yet my horizontal-pendulum seismometer had no trouble registering that 6.8 earthquake. In fact, when I first looked at my data, I figured the immense excursions must reflect some sort of gross malfunction! But a comparison with the trace of a research-grade seismometer located nearby revealed that the waves arrived in my garage at the very same time. I could even make out a precursor 5.9 earthquake about an hour before the big one. My new seismometer is not too big and awkward, as many long-period instruments are. Nor is it too small, which would make it less sensitive to far-off seismic signals. In my view, this Goldilocks design is just right.
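As a rough illustration of what can be done with the log described above (one Hall-sensor reading every 0.1 second, time-stamped on the microSD card), here is a short Python sketch that reads such a file and subtracts a one-minute running mean so slow drift does not hide the quake-band wiggles. The two-column, header-free CSV layout and the seismo_log.csv file name are assumptions made for the example, not the author’s actual format.

```python
# Post-processing sketch for a hypothetical seismometer log: each row holds a
# timestamp and a raw Hall-sensor count, sampled at 10 Hz (every 0.1 second).

import csv
from collections import deque

def detrended_trace(path, window=600):
    """Subtract a running mean over `window` samples (600 samples = 60 s at
    10 Hz) so slow thermal and tilt drift doesn't mask earthquake signals."""
    times, counts = [], []
    with open(path, newline="") as f:
        for timestamp, raw in csv.reader(f):   # assumes two columns, no header
            times.append(timestamp)
            counts.append(int(raw))

    recent = deque(maxlen=window)
    trace = []
    for value in counts:
        recent.append(value)
        trace.append(value - sum(recent) / len(recent))
    return times, trace

times, trace = detrended_trace("seismo_log.csv")   # hypothetical file name
peak = max(range(len(trace)), key=lambda i: abs(trace[i]))
print(f"largest excursion of {trace[peak]:+.1f} counts at {times[peak]}")
```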
- 9 Intriguing Engineering Feats for 2025 by Kohava Mendelsohn on 1 January 2025 at 15:00
This story is part of our Top Tech 2025 special report. Methane Measurements for the Masses From high above us, satellites track devastating emissions of the greenhouse gases that will alter our climate. So far, their data has been private, shared only with companies or governments. MethaneSAT is changing that. Launched on 4 March 2024, it will pinpoint specific problem areas and track emissions of methane more broadly. Anyone will be able to access this data when the satellite is fully operational, in early 2025. Want a sneak peek? You can look right now at data from MethaneAIR, a research jet with the ability to gather about a quarter of the volume of data of MethaneSAT. Cleaning Up Millions of Liters of Radioactive Waste At the Hanford Site in eastern Washington, radioactive nuclear waste from the development of the first atomic bombs is currently leaking into soil and polluting the surrounding environment. Now a cleanup effort, decades in the making, is due to start trapping that waste by turning it into glass. This process, called vitrification, requires temperatures over 1,100 °C, about as hot as lava flowing from a volcano. Waste products are mixed with silica and other materials and heated in underground tanks to form molten glass, which is then poured into containment vessels to become solid glass. Currently, the Hanford Vit Plant is in the “cold commissioning” phase, where the facility is up and running but processing nonradioactive materials as a test. If all goes well, true cleanup will begin in 2025. A Plane Anyone Can Fly On average, it takes 55 hours of in-the-air flight time to get a private pilot license in the United States, and that’s not even counting the weeks of training on the ground. Airhart Aeronautics wants you to be ready to fly a plane in just one hour. Their new personal aircraft, the Airhart Sling, is designed to be user-friendly, safe, and as easy to learn as possible. Using a single stick, pilots simply point in the direction they want to go and the plane follows, even during takeoffs and landings. The Sling’s computer system translates these controls into commands to the engine and flight systems. The first test flight is planned for 2025, with orders shipping to customers in 2026. At an initial price of US $500,000, however, it might be a while before just anyone can fly. The Future of Farming Farmers in India are facing a financial crisis, magnified by debt, lengthy supply chains, and natural disasters. With small plots of about 20,000 square meters making up roughly 80 percent of India’s farms, it’s hard to find a solution that can reach every farmer. Enter Agri Stack. This database, designed by India’s Department of Agriculture and Farmer’s Welfare, will match farmers and their land with government agencies and other companies, helping farmers access money, knowledge, and early natural-disaster warnings. With a standardized protocol called the Unified Farmer Service Interface, agritech companies can design products that they know will be easily integrated into the overall system. By the start of 2025, the government aims to have 60 million farmers registered on its site, with that number growing as the year progresses. A New Reusable Rocket Launcher SpaceX’s Falcon 9 and Falcon Heavy are the only reusable rocket boosters in the world. But a new challenger is arriving: Rocket Lab’s Neutron. Launching in mid-2025, Neutron will be able to launch 13,000 kilograms to low earth orbit or 1,500 kg to Mars or Venus. 
It will have a reusable booster designed to reenter Earth’s atmosphere and land safely back at its launch site. To be competitive, Neutron is targeting a price of US $50 million per launch, slightly lower than Falcon 9’s $67 million price tag. Profitable Robotaxis Robotaxis promise private, direct, and comfortable rides straight into the future. But amid safety concerns and slow scaling, no robotaxi companies have actually achieved a profit. Nevertheless, Chinese search giant Baidu expects its Apollo Go robotaxis to reach that milestone in 2025. The fleet of about 500 taxis is the largest in China and is expected to double in size with the addition of new taxis in Wuhan by the end of 2024. Baidu has already operated more than 7 million rides. According to the company, key to the service’s profitability is that the new sixth-generation vehicles cost only about US $28,000 to manufacture. Baidu plans to expand into Hong Kong, Singapore, and the Middle East. 30 Years of Java 2025 will be the 30th year of the second most popular programming language in the world, according to our latest Top Programming Languages breakdown. James Gosling released Java in May of 1995, focused on creating a programming language in which it was easy for different devices to communicate with one another. Instead of a typical compiler that translates code to run on a specific computer, Java compilers translate code to bytecode, which can be run on any computer possessing a Java virtual machine. Java virtual machines then decode bytecode into instructions for the device’s specific CPU. This is known colloquially as the “write once, run anywhere” principle, allowing Java to be used widely on the Internet and accessed by many different devices. Want to learn Java? It’s not too late to get started today! More Memory for AI Machines Generative AI needs huge amounts of fast and powerful memory to continue its skyrocketing accomplishments. High-bandwidth memory (HBM), a stack of DRAM dies connected vertically, is a key ingredient for the high-performance GPUs training today’s most powerful AIs. The next generation of high-bandwidth memory is HBM4, which is expected to stack up to 16 memory dies in one module. While its predecessor, HBM3E (the “E” is for “extended”), can technically have stacks up to 16, only stacks of up to 12 have been released. HBM4 will also have a 2,048-bit interface and transmit 1.5 terabytes per second, improving HBM3E’s bandwidth by 33 percent. DRAM makers are expected to begin manufacturing the first HBM4 devices in 2025. A New Moore’s Law Machine Industrial use of extreme-ultraviolet (EUV) lithography, the must-have tool for the most advanced computer chips, has been a thing for barely five years. But the chip industry already needs the next generation—high-numerical-aperture (NA) EUV. This technique increases the range of angles at which the system can manipulate light, leading to even finer resolution. The EUV tool maker ASML and the European research institute Imec have jointly created the first high-NA EUV photolithography lab. They expect chipmakers to use their work to begin mass manufacturing in 2025 or 2026.
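Returning to the HBM4 paragraph above, a little arithmetic shows what those figures imply. The sketch below only rearranges the numbers quoted in the article (a 2,048-bit interface, 1.5 terabytes per second, and a 33 percent gain over HBM3E); the derived per-pin rate and HBM3E bandwidth are back-of-the-envelope implications, not vendor specifications.

```python
# Back-of-the-envelope arithmetic using only the HBM4 figures quoted above.

hbm4_bandwidth_tb_s = 1.5      # stated HBM4 bandwidth, terabytes per second
interface_width_bits = 2048    # stated HBM4 interface width
claimed_gain = 0.33            # stated improvement over HBM3E

# Per-pin data rate implied by total bandwidth and interface width,
# ignoring protocol overhead: total bits per second divided by pin count.
per_pin_gb_s = hbm4_bandwidth_tb_s * 1e12 * 8 / interface_width_bits / 1e9
print(f"implied per-pin rate: {per_pin_gb_s:.1f} Gb/s")        # about 5.9 Gb/s

# HBM3E bandwidth implied by the 33 percent comparison.
hbm3e_tb_s = hbm4_bandwidth_tb_s / (1 + claimed_gain)
print(f"implied HBM3E bandwidth: {hbm3e_tb_s:.2f} TB/s")       # about 1.13 TB/s
```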
- Remembering Former IEEE President Emerson Pugh by Amanda Davis on 31 December 2024 at 19:00
Emerson W. Pugh, 1989 IEEE president, died on 8 December at the age of 95. The IEEE Fellow served as president of the IEEE Foundation from 2000 to 2004. “Emerson Pugh was one of the very first IEEE volunteers I met when I joined the IEEE staff in 1997,” says Karen Galuchie, IEEE Foundation executive director. “I will be forever grateful to Emerson for the lessons he taught me, the passion with which he shared his time and talent with IEEE, and the role he played in creating the IEEE Foundation we know today.” Pugh was an active member of the IEEE History Committee, serving as its chair in 1997. In 2009 he worked with the IEEE History Center to create the IEEE STARS (Significant Technological Achievement Recognition Selections) program, an online compendium of invited, peer-reviewed articles on the history of major developments in electrical and computer science and technology. The articles have been incorporated into the Engineering and Technology History Wiki. “Emerson Pugh was the most influential volunteer during my more than 27-year tenure (so far),” says Michael Geselowitz, senior director of the IEEE History Center. “He was able to combine his three passions—engineering, IEEE, and history—by joining the IEEE History Committee.” Pugh worked for 35 years at IBM, where he developed a number of memory technologies for early computer systems. Innovative work at IBM He received bachelor’s and doctoral degrees in physics from Carnegie Tech (now Carnegie Mellon) in 1951 and 1956. Following graduation, he joined the school as an assistant professor of physics. After a year of teaching, he left to join IBM, in Poughkeepsie, N.Y., as a researcher in the metal physics group. In 1958 he was promoted to manager of the group. Pugh was a visiting scientist in 1961 and 1962 at IBM’s Zurich laboratory before relocating to the company’s Watson Research Center, in Yorktown Heights, N.Y. There he led the development of a thin magnetic film memory array used in the IBM System/360, a family of mainframe computer systems that debuted in 1964. In 1965 he was named director of IBM’s operational memory group. Later he served as director of technical planning for the company’s research division. He also was a consultant to IBM’s research director. He took a leave of absence in 1974 to lead a study by the U.S. National Academy of Sciences on motor vehicle emissions and fuel economy. He returned to the company the following year to research memory technologies. He developed bubble memory, a type of nonvolatile computer memory that uses a thin film of a magnetic material to hold small magnetized areas known as bubbles or domains. Each domain stores one bit of data, the smallest unit of digital information. Beginning in the early 1980s, Pugh worked on IBM’s technical history project, authoring or coauthoring four books on the company and its technical developments. He retired in 1993. Decades of service Pugh joined IEEE in the mid-1960s and was an active volunteer. He served as 1973 president of the IEEE Magnetics Society. He was the editor of IEEE Transactions on Magnetics in 1968. He was Division IV director and vice president of IEEE Technical Activities. In 1989 he was elected IEEE president. During his term, he oversaw revisions to the IEEE Code of Ethics and the opening of the IEEE Operations Center, in Piscataway, N.J. The IEEE History Center in 2019 established the Pugh Young Scholar in Residence internship, named after him and his wife, Elizabeth. 
Students studying the history of technology or engineering can become research fellows at the center and receive a stipend of US $5,000. Pugh was active in several other organizations. He served on the United Engineering board of trustees, for example, and he was a Fellow of the American Physical Society. Among his recognitions were a 1992 IEEE-USA literary award, the 1991 IEEE Magnetics Society Achievement Award, and a 1990 Carnegie Mellon Alumni Association achievement award.
- In 2025, People Will Try Living in This Underwater Habitat by Liam Critchley on 31 December 2024 at 15:00
The future of human habitation in the sea is taking shape in an abandoned quarry on the border of Wales and England. There, the ocean-exploration organization Deep has embarked on a multiyear quest to enable scientists to live on the seafloor at depths up to 200 meters for weeks, months, and possibly even years. “Aquarius Reef Base in St. Croix was the last installed habitat back in 1987, and there hasn’t been much ground broken in about 40 years,” says Kirk Krack, human diver performance lead at Deep. “We’re trying to bring ocean science and engineering into the 21st century.” This article is part of our special report Top Tech 2025. Deep’s agenda has a major milestone this year—the development and testing of a small, modular habitat called Vanguard. This transportable, pressurized underwater shelter, capable of housing up to three divers for periods ranging up to a week or so, will be a stepping stone to a more permanent modular habitat system—known as Sentinel—that is set to launch in 2027. “By 2030, we hope to see a permanent human presence in the ocean,” says Krack. All of this is now possible thanks to an advanced 3D-printing-and-welding approach that can print these large habitation structures. How would such a presence benefit marine science? Krack runs the numbers for me: “With current diving at 150 to 200 meters, you can only get 10 minutes of work completed, followed by 6 hours of decompression. With our underwater habitats we’ll be able to do seven years’ worth of work in 30 days with shorter decompression time. More than 90 percent of the ocean’s biodiversity lives within 200 meters’ depth and at the shorelines, and we only know about 20 percent of it.” Understanding these undersea ecosystems and environments is a crucial piece of the climate puzzle, he adds: The oceans absorb nearly a quarter of human-caused carbon dioxide and roughly 90 percent of the excess heat generated by human activity. Underwater Living Gets the Green Light This Year Deep is looking to build an underwater life-support infrastructure that features not just modular habitats but also training programs for the scientists who will use them. Long-term habitation underwater involves a specialized type of activity called saturation diving, so named because the diver’s tissues become saturated with gases, such as nitrogen or helium. It has been used for decades in the offshore oil and gas sectors but is uncommon in scientific diving, outside of the relatively small number of researchers fortunate enough to have spent time in Aquarius. Deep wants to make it a standard practice for undersea researchers. The first rung in that ladder is Vanguard, a rapidly deployable, expedition-style underwater habitat the size of a shipping container that can be transported and supplied by a ship and house three people down to depths of about 100 meters. It is set to be tested in a quarry outside of Chepstow, Wales, in the first quarter of 2025. The Vanguard habitat, seen here in an illustrator’s rendering, will be small enough to be transportable and yet capable of supporting three people at a maximum depth of 100 meters. Deep The plan is to be able to deploy Vanguard wherever it’s needed for a week or so. Divers will be able to work for hours on the seabed before retiring to the module for meals and rest. One of the novel features of Vanguard is its extraordinary flexibility when it comes to power.
There are currently three options: When deployed close to shore, it can connect by cable to an onshore distribution center using local renewables. Farther out at sea, it could use supply from floating renewable-energy farms and fuel cells that would feed Vanguard via an umbilical link, or it could be supplied by an underwater energy-storage system that contains multiple batteries that can be charged, retrieved, and redeployed via subsea cables. The breathing gases will be housed in external tanks on the seabed and contain a mix of oxygen and helium that will depend on the depth. In the event of an emergency, saturated divers won’t be able to swim to the surface without suffering a life-threatening case of decompression illness. So, Vanguard, as well as the future Sentinel, will also have backup power sufficient to provide 96 hours of life support, in an external, adjacent pod on the seafloor. Data gathered from Vanguard this year will help pave the way for Sentinel, which will be made up of pods of different sizes and capabilities. These pods will even be capable of being set to different internal pressures, so that different sections can perform different functions. For example, the labs could be at the local bathymetric pressure for analyzing samples in their natural environment, but alongside those a 1-atmosphere chamber could be set up where submersibles could dock and visitors could observe the habitat without needing to equalize with the local pressure. As Deep sees it, a typical configuration would house six people—each with their own bedroom and bathroom. It would also have a suite of scientific equipment including full wet labs to perform genetic analyses, saving days by not having to transport samples to a topside lab for analysis. A Sentinel configuration is designed to go for a month before needing a resupply. Gases will be topped off via an umbilical link from a surface buoy, and food, water, and other supplies would be brought down during planned crew changes every 28 days. But people will be able to live in Sentinel for months, if not years. “Once you’re saturated, it doesn’t matter if you’re there for six days or six years, but most people will be there for 28 days due to crew changes,” says Krack. Where 3D Printing and Welding Meet It’s a very ambitious vision, and Deep has concluded that it can be achieved only with advanced manufacturing techniques. Deep’s manufacturing arm, Deep Manufacturing Labs (DML), has come up with an innovative approach for building the pressure hulls of the habitat modules. It’s using robots to combine metal additive manufacturing with welding in a process known as wire-arc additive manufacturing. With these robots, metal layers are built up as they would be in 3D printing, but the layers are fused together via welding using a metal-inert-gas torch. At Deep’s base of operations at a former quarry in Tidenham, England, resources include two Triton 3300/3 MK II submarines. One of them is seen here at Deep’s floating “island” dock in the quarry. Deep During a tour of the DML, Harry Thompson, advanced manufacturing engineering lead, says, “We sit in a gray area between welding and additive process, so we’re following welding rules, but for pressure vessels we [also] follow a stress-relieving process that is applicable for an additive component.
We’re also testing all the parts with nondestructive testing.” Each of the robot arms has an operating range of 2.8 by 3.2 meters, but DML has boosted this area by means of a concept it calls Hexbot. It’s based on six robotic arms programmed to work in unison to create habitat hulls with a diameter of up to 6.1 meters. The biggest challenge with creating the hulls is managing the heat during the additive process to keep the parts from deforming as they are created. For this, DML is relying on the use of heat-tolerant steels and on very precisely optimized process parameters. Engineering Challenges for Long-Term Habitation Besides manufacturing, there are other challenges that are unique to the tricky business of keeping people happy and alive 200 meters underwater. One of the most fascinating of these revolves around helium. Because of its narcotic effect at high pressure, nitrogen shouldn’t be breathed by humans at depths below about 60 meters. So, at 200 meters, the breathing mix in the habitat will be 2 percent oxygen and 98 percent helium. But because of its very high thermal conductivity, “we need to heat helium to 31–32 °C to get a normal 21–22 °C internal temperature environment,” says Rick Goddard, director of engineering at Deep. “This creates a humid atmosphere, so porous materials become a breeding ground for mold”. There are a host of other materials-related challenges, too. The materials can’t emit gases, and they must be acoustically insulating, lightweight, and structurally sound at high pressures. Deep’s proving grounds are a former quarry in Tidenham, England, that has a maximum depth of 80 meters. Deep There are also many electrical challenges. “Helium breaks certain electrical components with a high degree of certainty,” says Goddard. “We’ve had to pull devices to pieces, change chips, change [printed circuit boards], and even design our own PCBs that don’t off-gas.” The electrical system will also have to accommodate an energy mix with such varied sources as floating solar farms and fuel cells on a surface buoy. Energy-storage devices present major electrical engineering challenges: Helium seeps into capacitors and can destroy them when it tries to escape during decompression. Batteries, too, develop problems at high pressure, so they will have to be housed outside the habitat in 1-atmosphere pressure vessels or in oil-filled blocks that prevent a differential pressure inside. Is it Possible to Live in the Ocean for Months or Years? When you’re trying to be the SpaceX of the ocean, questions are naturally going to fly about the feasibility of such an ambition. How likely is it that Deep can follow through? At least one top authority, John Clarke, is a believer. “I’ve been astounded by the quality of the engineering methods and expertise applied to the problems at hand and I am enthusiastic about how DEEP is applying new technology,” says Clarke, who was lead scientist of the U.S. Navy Experimental Diving Unit. “They are advancing well beyond expectations…. I gladly endorse Deep in their quest to expand humankind’s embrace of the sea.”
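To put the 2 percent oxygen figure quoted above in context, here is a quick worked calculation using the common rule of thumb that every 10 meters of seawater adds roughly 1 atmosphere of pressure. It is a sketch for intuition only, not a dive-planning calculation.

```python
# Why a 2% oxygen mix makes sense at 200 meters: the partial pressure of a gas
# is its fraction of the mix times the absolute ambient pressure.

depth_m = 200
ambient_atm = 1 + depth_m / 10    # roughly 21 atm absolute at 200 m

surface_air_o2 = 0.21             # oxygen fraction in ordinary air
habitat_o2 = 0.02                 # oxygen fraction in the 2% O2 / 98% He mix

print(f"air at the surface:      ppO2 = {surface_air_o2 * 1:.2f} atm")
print(f"ordinary air at 200 m:   ppO2 = {surface_air_o2 * ambient_atm:.1f} atm")  # ~4.4 atm, dangerously high
print(f"2% habitat mix at 200 m: ppO2 = {habitat_o2 * ambient_atm:.2f} atm")      # ~0.42 atm, about twice surface air
```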
- The Top 10 AI Stories of 2024 by Eliza Strickland on 31 December 2024 at 14:00
IEEE Spectrum’s most popular AI stories of the last year show a clear theme. In 2024, the world struggled to come to terms with generative AI’s capabilities and flaws—both of which are significant. Two of the year’s most read AI articles dealt with chatbots’ coding abilities, while another looked at the best way to prompt chatbots and image generators (and found that humans are dispensable). In the “flaws” column, one in-depth investigation found that the image generator Midjourney has a bad habit of spitting out images that are nearly identical to trademarked characters and scenes from copyrighted movies, while another investigation looked at how bad actors can use the image generator Stable Diffusion version 1.5 to make child sexual abuse material. Two of my favorites from this best-of collection are feature articles that tell remarkable stories. In one, an AI researcher narrates how he helped gig workers gather and organize data in order to audit their employer. In another, a sociologist who embedded himself in a buzzy startup for 19 months describes how engineers cut corners to meet venture capitalists’ expectations. Both of these important stories bring readers inside the hype bubble for a real view of how AI-powered companies leverage human labor. In 2025, IEEE Spectrum promises to keep giving you the ground truth. 1. AI Prompt Engineering Is Dead David Plunkert Even as the generative AI boom brought fears that chatbots and image generators would take away jobs, some hoped that it would create entirely new jobs—like prompt engineering, which is the careful construction of prompts to get a generative AI tool to create exactly the desired output. Well, this article put a damper on that hope. Spectrum editor Dina Genkina reported on new research showing that AI models do a better job of constructing prompts than human engineers. 2. Generative AI Has a Visual Plagiarism Problem Gary Marcus and Reid Southen via Midjourney The New York Times and other newspapers have already sued AI companies for text plagiarism, arguing that chatbots are lifting their copyrighted stories verbatim. In this important investigation, Gary Marcus and Reid Southen showed clear examples of visual plagiarism, using Midjourney to produce images that looked almost exactly like screenshots from major movies, as well as trademarked characters such as Darth Vader, Homer Simpson, and Sonic the Hedgehog. It’s worth taking a look at the full article just to see the imagery. The authors write: “These results provide powerful evidence that Midjourney has trained on copyrighted materials, and establish that at least some generative AI systems may produce plagiaristic outputs, even when not directly asked to do so, potentially exposing users to copyright infringement claims.” 3. How Good Is ChatGPT at Coding, Really? Getty Images When OpenAI’s ChatGPT first came out in late 2022, people were amazed by its capacity to write code. But some researchers who wanted an objective measure of its ability evaluated its code in terms of functionality, complexity, and security. They tested GPT-3.5 (a version of the large language model that powers ChatGPT) on 728 coding problems from the LeetCode testing platform in five programming languages. They found that it was pretty good on coding problems that had been on LeetCode before 2021, presumably because it had seen those problems in its training data.
With more recent problems, its performance fell off dramatically: Its score on functional code for easy coding problems dropped from 89 percent to 52 percent, and for hard problems it dropped from 40 percent to 0.66 percent. It’s worth noting, though, that the OpenAI models GPT-4 and GPT-4o are superior to the older model GPT-3.5. And while general-purpose generative AI platforms continue to improve at coding, 2024 also saw the proliferation of increasingly capable AI tools that are tailored for coding. 4. AI Copilots Are Changing How Coding Is Taught Alamy That third story on our list perfectly sets up the fourth, which takes a good look at how professors are altering their approaches to teaching coding, given the aforementioned proliferation of coding assistants. Introductory computer science courses are focusing less on coding syntax and more on testing and debugging, so students are better equipped to catch mistakes made by their AI assistants. Another new emphasis is problem decomposition, says one professor: “This is a skill to know early on because you need to break a large problem into smaller pieces that an LLM can solve.” Overall, instructors say that their students’ use of AI tools is freeing them up to teach higher-level thinking that used to be reserved for advanced classes. 5. Shipt’s Algorithm Squeezed Gig Workers. They Fought Back Mike McQuade This feature story was authored by an AI researcher, Dana Calacci, who banded together with gig workers at Shipt, the shopping and delivery platform owned by Target. The workers knew that Shipt had changed its payment algorithm in some mysterious way, and many had seen their pay drop, but they couldn’t get answers from the company—so they started collecting data themselves. When they joined forces with Calacci, he worked with them to build a textbot so workers could easily send screenshots of their pay receipts. The tool also analyzed the data, and told each worker whether they were getting paid more or less under the new algorithm. It found that 40 percent of workers had gotten an unannounced pay cut, and the workers used the findings to gain media attention as they organized strikes, boycotts, and protests. Calacci writes: “Companies whose business models rely on gig workers have an interest in keeping their algorithms opaque. This “information asymmetry” helps companies better control their workforces—they set the terms without divulging details, and workers’ only choice is whether or not to accept those terms.... There’s no technical reason why these algorithms need to be black boxes; the real reason is to maintain the power structure.” 6. 15 Graphs That Explain the State of AI in 2024 IEEE Spectrum Like a couple of Russian nesting dolls, here we have a list within a list. Every year Stanford puts out its massive AI Index, which has hundreds of charts to track trends within AI; chapters include technical performance, responsible AI, economy, education, and more. And for the past four years, Spectrum has read the whole thing and pulled out those charts that seem most indicative of the current state of AI. In 2024, we highlighted investment in generative AI, the cost and environmental footprint of training foundation models, corporate reports of AI helping the bottom line, and public wariness of AI. 7. 
A New Type of Neural Network Is More Interpretable iStock Neural networks have been the dominant architecture in AI since 2012, when a system called AlexNet combined GPU power with a many-layered neural network to get never-before-seen performance on an image-recognition task. But they have their downsides, including their lack of transparency: They can provide an answer that is often correct, but can’t show their work. This article describes a fundamentally new way to make neural networks that are more interpretable than traditional systems and also seem to be more accurate. When the designers tested their new model on physics questions and differential equations, they were able to visually map out how the model got its (often correct) answers. 8. AI Takes On India’s Most Congested City Edd Gent The next story brings us to the tech hub of Bengaluru, India, which has grown faster in population than in infrastructure—leaving it with some of the most congested streets in the world. Now, a former chip engineer has been given the daunting task of taming the traffic. He has turned to AI for help, using a tool that models congestion, predicts traffic jams, identifies events that draw big crowds, and enables police officers to log incidents. For next steps, the traffic czar plans to integrate data from security cameras throughout the city, which would allow for automated vehicle counting and classification, as well as data from food delivery and ride sharing companies. 9. Was an AI Image Generator Taken Down for Making Child Porn? Mike Kemp/Getty Images In another important investigation exclusive to Spectrum, AI policy researchers David Evan Harris and Dave Willner explained how some AI image generators are capable of making child sexual abuse material (CSAM), even though it’s against the stated terms of use. They focused particularly on the open-source model Stable Diffusion version 1.5, and on the platforms Hugging Face and Civitai that host the model and make it available for free download (in the case of Hugging Face, it was downloaded millions of times per month). They were building on prior research that has shown that many image generators were trained on a data set that included hundreds of pieces of CSAM. Harris and Willner contacted companies to ask for responses to these allegations and, perhaps in response to their inquiries, Stable Diffusion 1.5 promptly disappeared from Hugging Face. The authors argue that it’s time for AI companies and hosting platforms to take seriously their potential liability. 10. The Messy Reality Behind a Silicon Valley Unicorn The Voorhes What happens when a sociologist embeds himself in a San Francisco startup that has just received an initial venture capital investment of $4.5 million and quickly shot up through the ranks to become one of Silicon Valley’s “unicorns” with a valuation of more than $1 billion? Answer: You get a deeply engaging book called Behind the Startup: How Venture Capital Shapes Work, Innovation, and Inequality, from which Spectrum excerpted a chapter. The sociologist author, Benjamin Shestakofsky, describes how the company that he calls AllDone (not its real name) prioritized growth at all costs to meet investor expectations, leading engineers to focus on recruiting both staff and users rather than doing much actual engineering. 
Although the company’s whole value proposition was that it would automatically match people who needed local services with local service providers, it ended up outsourcing the matching process to a Filipino workforce that manually made matches. “The Filipino contractors effectively functioned as artificial artificial intelligence,” Shestakofsky writes, “simulating the output of software algorithms that had yet to be completed.”
- IEEE Continues to Strengthen Its Research Integrity Process by Kathy Pretz on 30 December 2024 at 19:00
Ensuring the integrity of the research IEEE publishes is crucial to maintaining the organization’s credibility as a scholarly publisher. IEEE produces more than 30 percent of the world’s scholarly literature in electrical engineering, electronics, and computer science. In fact, the 50 top-patenting companies cite IEEE nearly three times more than any other technical-literature publisher. With the volume of academic paper submissions increasing over the years, IEEE is continuously evolving its publishing processes based on industry best practices to help detect papers with issues of concern. They include plagiarism, inappropriate citations, author coercion by editors or reviewers, and the use of artificial intelligence to generate content without proper disclosure. “Within the overall publishing industry, there are now many more types of misconduct and many more cases of violations, so it has become essential for all technical publishers to deal seriously with them,” says IEEE Fellow Sergio Benedetto, vice president of IEEE Publication Services and Products. “Authors are also more careful to choose publishers that are serious about addressing misconduct. It has become important not only for the integrity of research but also for the publishing business,” adds Benedetto, a member of the IEEE Board of Directors. “It’s important to understand that the IEEE is not blind to this problem but rather is investing heavily in trying to solve it,” says Steven Heffner, managing director of IEEE Publications. “We’re investing in technology around detection and investigation. We’re also investing with partners to develop their technologies so that we can help the whole industry scale up a detection system. And we’re also investing in staff resources.” New ways to rig the system Some of the root causes driving the misconduct have to do with incentives offered to authors to encourage them to publish more, Heffner says. “Promotions and tenure are tied to publishing research, and the old ‘publish or perish’ imperative is still in operation,” he says. “There are now more ways for scholars to be evaluated through technology tracking the number of times their work has been cited, and through the use of bibliometrics,” a statistical method that counts the number of publications and citations by an author or researcher. Those statistics are used to validate the value their work brings to their organization. “Even more sophisticated ways are being used to manipulate the system of bibliometrics,” Heffner says, “such as citation pseudo cartels that operate in a quid-pro-quo way of ‘I’ll cite you if you cite me.’ “Unfortunately, these are all creating more opportunities for people to abuse the system.” Other activities on the rise include paper mills run by for-profit companies, which create fake journal articles that appear to be genuine research and then sell authorship to would-be scholars. “I think the paper mill is the most dangerous at-scale problem we’ve got,” Heffner says. “But the old crimes such as plagiarism still persist, and in some ways are getting harder to spot.” Benedetto says some fraudulent authors are making up the names and websites of reviewers, so their articles get accepted without undergoing peer review. It’s a serious issue, he says.
“I don’t think IEEE is unique in its experience in this phenomenon of misconduct,” he says. “Several commercial publishers and many in fields outside of technology are seeing the same problems.” Addressing author misconduct The IEEE PSPB Publishing Conduct Committee, which handles editorial misconduct cases, treats violations of its publication process as major infractions. “IEEE’s volunteers are particularly strong on developing policies,” Heffner says. “We need that governance, but we also need their expertise as people who are participants in the endeavor of science.” Benedetto says IEEE is serious about finding questionable papers and approaches, and it has launched several initiatives. IEEE checks all content submitted to journals for plagiarism. A systematic, real-time analysis of data during the publication process helps identify potential wrongdoing. Reviewers are required to list any references they recommend on their review form, so that attempts to inflate citation metrics can be monitored. The organization’s peer review platform works to identify possible misuse by reviewers and editors. It monitors for biased reviews, conflicts of interest, and plagiarism, and it tracks reviewer activity to identify patterns that might indicate inappropriate behavior. The names of authors and editors are compared against a list of prohibited participants, people who have violated IEEE publishing principles and can no longer publish in the organization’s journals. Some unscrupulous authors are using artificial intelligence to game the system, Heffner says. “With the advent of generative AI, completely fraudulent papers can be made more quickly and look more convincingly authentic,” he says. That leads to concerns about the data’s validity. A new policy addresses the use of AI by authors and peer reviewers. Authors who use AI to create text or other work in their articles must clearly identify those portions and provide appropriate references to the AI tool. Reviewers are not permitted to load manuscripts into an AI-based large language model to generate their reviews, nor may they use AI to write them. Anyone who suspects misconduct of any type—including inappropriate citations, use of AI, and plagiarism—can file a complaint using the IEEE Ethics Reporting Line. It is available seven days a week, 24 hours a day. An independent third party manages the process, and the information provided is sent to IEEE on a confidential basis and, if requested, anonymously. Types of corrective actions Should an author or reviewer be suspected of misconduct, a case is opened and a detailed analysis is performed. An independent committee reviews the information and, if warranted, begins an investigation. The alleged offender is allowed to respond to the allegations. If the offender is found guilty, sanctions are applied through an escalating system that depends on the severity of the violation. Individuals who plagiarize content at a severe enough level are restricted from editorial duties and publishing, and their names are added to the prohibited participants list (PPL) database. Those on the list may not participate in any IEEE publication–related activities, including conferences. They also are removed from any editorial positions they hold. IEEE has strengthened its article retraction and removal policies. When an article is flagged, the author receives an expression of concern. Unreliable data could result from an honest error or from research misconduct.
IEEE considers retraction a method of correcting the literature. When there are issues with the content, it takes the appropriate level of care and time in the review and, if necessary, retracts nonconforming publications. The retraction notices alert readers to those publications. Retractions also are used to alert readers to cases of redundant publication, plagiarism, and failure to disclose competing interests likely to have influenced interpretations or recommendations. In the most severe cases, articles are removed. Retracted articles are not removed from printed copies of the publication or electronic archives, but their retracted status and the reason for retraction are explained. IEEE’s corrective actions for publishing misconduct used to be focused on restricting authorship, but they now include restrictions on editorial roles such as peer reviewer, editor, conference organizer, and conference publication officer. Their names also are added to the PPL, and they can be prohibited from publishing with IEEE. Industry-wide efforts to detect misconduct IEEE and other scientific, technical, and medical (STM) publishers have joined forces to launch pilot programs aimed at detecting simultaneous submissions of suspicious content across publishers, Heffner says. They are working on developing an STM Integrity Hub, a powerful submission screening tool that can flag tactics related to misconduct, including paper mills. The publishers also are developing custom AI and machine learning–based tools to screen submissions and those articles that have undergone peer review in real time. Some of the tools have already been rolled out. Benedetto says he is working on a process for sharing IEEE’s list of prohibited participants with other publishers. “Those found guilty of misconduct simply go to other publishers,” he says. “Each publisher has its own list, but those aren’t shared with others, so it has become very simple for a banned author to change publishers to get around the ban. A shared list of misconduct cases would prevent those who are found guilty from publishing in all technical journals during the period of their sentence.” “We are all working together to share information and to share best practices,” Heffner says, “so that we can fight this as a community of publishers that take their stewardship of the scholarly record seriously.” “Some colleagues or authors think that misconduct may be a shortcut to build a better career or reach publication targets more easily,” Benedetto says. “This is not true. Misconduct is not a personal issue. It’s an issue that can and does build mistrust toward institutions, publishers, and journals. “IEEE will continue to strengthen its efforts to combat publication misconduct cases because we believe that research integrity is the basis of our business. If readers lose trust in our journals and authors, then they lose trust in the IEEE itself.”
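As a rough illustration of one of the automated checks described above, the sketch below screens a submission’s author list against a prohibited-participants list after normalizing the names. Everything here is hypothetical (the names, the data structures, and the matching rule); IEEE’s actual screening is not public and presumably relies on richer identifiers than plain name strings.

```python
# Illustrative sketch only: a simplified screen of submission authors against a
# prohibited-participants list (PPL). Names and matching rules are hypothetical;
# a real system would use richer identifiers (e.g. ORCID, affiliation, history).
import unicodedata

def normalize(name: str) -> str:
    """Lowercase, strip accents and extra whitespace so trivial variants match."""
    decomposed = unicodedata.normalize("NFKD", name)
    ascii_only = decomposed.encode("ascii", "ignore").decode("ascii")
    return " ".join(ascii_only.lower().split())

def screen_submission(authors: list[str], ppl: set[str]) -> list[str]:
    """Return the authors whose normalized names appear on the prohibited list."""
    normalized_ppl = {normalize(p) for p in ppl}
    return [a for a in authors if normalize(a) in normalized_ppl]

if __name__ == "__main__":
    prohibited = {"Jane Q. Example", "José Fictitious"}           # hypothetical PPL entries
    submission_authors = ["A. Nother Author", "Jose Fictitious"]  # hypothetical submission
    flagged = screen_submission(submission_authors, prohibited)
    print("Flagged for manual review:", flagged)                  # -> ['Jose Fictitious']
```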
- Why China Is Building a Thorium Molten-Salt Reactor by Yu-Tzu Chiu on 30 December 2024 at 15:00
After a half-century hiatus, thorium has returned to the front lines of nuclear power research as a source of fuel. In 2025, China plans to start building a demonstration thorium-based molten-salt reactor in the Gobi Desert. The 10-megawatt reactor project, managed by the Chinese Academy of Sciences’ Shanghai Institute of Applied Physics (SINAP), is scheduled to be operational by 2030, according to an environmental-impact report released by the Academy in October. The project follows a 2-MW experimental version completed in 2021 and operated since then. This article is part of our special report Top Tech 2025. China’s efforts put it at the forefront of both thorium-based fuel breeding and molten-salt reactors. Several companies elsewhere in the world are developing plans for this kind of fuel or reactor, but none has yet operated one. Prior to China’s pilot project, the last operating molten-salt reactor was Oak Ridge National Laboratory’s Molten Salt Reactor Experiment, which ran on uranium. It shut down in 1969. Thorium-232, found in igneous rocks and heavy mineral sands, is more abundant on Earth than the commonly used isotope in nuclear fuel, uranium-235. But this weakly radioactive metal isn’t directly fissile: It can’t undergo fission, the splitting of atomic nuclei that produces energy. So it must first be transformed into fissile uranium-233. That’s technically feasible, but whether it’s economical and practical is less clear. China’s Thorium-Reactor Advances The attraction of thorium is that it can help achieve energy self-sufficiency by reducing dependence on uranium, particularly for countries such as India with enormous thorium reserves. But China may source it in a different way: The element is a waste product of China’s huge rare earth mining industry. Harnessing it would provide a practically inexhaustible supply of fuel. Already, China’s Gansu province has maritime and aerospace applications in mind for this future energy supply, according to the state-run Xinhua News Agency. Scant technical details of China’s reactor exist, and SINAP didn’t respond to IEEE Spectrum’s requests for information. The Chinese Academy of Sciences’ environmental-impact report states that the molten-salt reactor core will be 3 meters in height and 2.8 meters in diameter. It will operate at 700 °C and have a thermal output of 60 MW, along with 10 MW of electrical output. Molten-salt breeder reactors are the most viable designs for thorium fuel, says Charles Forsberg, a nuclear scientist at MIT. In this kind of reactor, thorium fluoride dissolves in molten salt in the reactor’s core. To turn thorium-232 into fuel, it is irradiated with neutrons; capturing a neutron turns it into thorium-233, which decays into an intermediate, protactinium-233, and then into uranium-233, which is fissile. During this fuel-breeding process, protactinium is removed from the reactor core while it decays, and then it is returned to the core as uranium-233. Fission occurs, generating heat and then steam, which drives a turbine to generate electricity. But many challenges come along with thorium use. A big one is dealing with the risk of proliferation. When thorium is transformed into uranium-233, it becomes directly usable in nuclear weapons. “It’s of a quality comparable to separated plutonium and is thus very dangerous,” says Edwin Lyman, director of nuclear power safety at the Union of Concerned Scientists in Washington, D.C. If the fuel is circulating in and out of the reactor core during operation, this movement introduces routes for the theft of uranium-233, he says.
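The breeding chain described above (neutron capture on thorium-232, then two beta decays through protactinium-233 to fissile uranium-233) can be sketched numerically. This is a toy illustration only: the half-lives below are approximate literature values (roughly 22 minutes for thorium-233 and 27 days for protactinium-233), and the constant capture rate is an arbitrary assumption, not a parameter of the SINAP design.

```python
# Minimal numerical sketch of the Th-232 -> Th-233 -> Pa-233 -> U-233 breeding chain.
# Half-lives are approximate literature values; the constant capture rate is an
# arbitrary illustrative assumption, not a property of any specific reactor.
import math

HALF_LIFE_TH233_S = 21.8 * 60          # thorium-233, roughly 22 minutes
HALF_LIFE_PA233_S = 27.0 * 24 * 3600   # protactinium-233, roughly 27 days

LAMBDA_TH233 = math.log(2) / HALF_LIFE_TH233_S
LAMBDA_PA233 = math.log(2) / HALF_LIFE_PA233_S

def breed(capture_rate: float, days: float, dt: float = 60.0):
    """Integrate the chain with a constant Th-233 production rate (atoms/s)."""
    n_th233 = n_pa233 = n_u233 = 0.0
    steps = int(days * 24 * 3600 / dt)
    for _ in range(steps):
        d_th233 = capture_rate - LAMBDA_TH233 * n_th233
        d_pa233 = LAMBDA_TH233 * n_th233 - LAMBDA_PA233 * n_pa233
        d_u233 = LAMBDA_PA233 * n_pa233
        n_th233 += d_th233 * dt
        n_pa233 += d_pa233 * dt
        n_u233 += d_u233 * dt
    return n_th233, n_pa233, n_u233

if __name__ == "__main__":
    for days in (1, 30, 90):
        th, pa, u = breed(capture_rate=1.0, days=days)
        print(f"day {days:3d}: Th-233 {th:10.1f}  Pa-233 {pa:12.1f}  U-233 {u:12.1f}")
```

The slow protactinium step in this toy model is what makes it attractive to pull protactinium-233 out of the core and let it finish decaying to uranium-233 elsewhere, as the article notes.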
Thorium Fuel Charms Nuclear-Power Sector Most groups developing molten-salt reactors are focused on uranium or uranium mixtures as a fuel, at least in the short term. Natura Resources and Abilene Christian University, both in Abilene, Texas, are collaborating on a 1-MW liquid-molten-salt reactor after receiving a construction permit in September from the U.S. Nuclear Regulatory Commission. Kairos Power is developing a fluoride-salt-cooled, high-temperature reactor in Oak Ridge, Tenn., that will use uranium-based tri-structural isotropic (TRISO) particle fuel. The company in October inked a deal with Google to provide a total of 500 MW by 2035 to power its data centers. But China isn’t alone in its thorium aspirations. Japan, the United Kingdom, and the United States, in addition to India, have shown interest in the fuel at one point or another. The proliferation issue doesn’t seem to be a showstopper, and there are ways to mitigate the risk. Denmark’s Copenhagen Atomics, for example, currently aims to develop a thorium-based molten-salt reactor, with a 1-MW pilot planned for 2026. The company plans to weld it shut so that would-be thieves would have to break open a highly radioactive system to get at the weapon-ready material. Chicago-based Clean Core Thorium Energy developed a blended thorium and enriched uranium (including high-assay low-enriched uranium, or HALEU) fuel, which they say can’t be used in a weapon. The fuel is designed for heavy-water reactors. Political and technical hurdles may have largely sidelined thorium fuel and molten-salt-reactor research for the last five decades, but both are definitely back on the drawing table.
- Andrew Ng: Unbiggen AI by Eliza Strickland on 9 February 2022 at 15:31
Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A. Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias. The conversation covers what’s next for really big models, the career advice Ng didn’t listen to, how he defines the data-centric AI movement, synthetic data, and why Landing AI asks its customers to do the work. The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way? Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions. When you say you want a foundation model for computer vision, what do you mean by that? Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them. What needs to happen for someone to build a foundation model for video? Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision. Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets.
While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries. It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users. Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation. “In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.” —Andrew Ng, CEO & Founder, Landing AI I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince. I expect they’re both convinced now. Ng: I think so, yes. Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.” How do you define data-centric AI, and why do you consider it a movement? Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data. When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline. The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up. You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them? Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images.
But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn. When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set? Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system. “Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.” —Andrew Ng For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance. Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training? Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle. One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way. When you talk about engineering the data, what do you mean exactly? Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. 
But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity. For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow. What about using synthetic data? Is that often a good solution? Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development. Do you mean that synthetic data would allow you to try the model on more data sets? Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category. “In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.” —Andrew Ng Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data. To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment? Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data. One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, and when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.
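The labeling-consistency tooling Ng describes, which surfaces the subset of images that annotators disagree on so they can be relabeled consistently, can be sketched in a few lines. This is a minimal illustration under assumed data structures (per-image labels from multiple annotators) and an assumed agreement threshold; it is not how LandingLens works internally.

```python
# Illustrative sketch: flag images whose labels disagree across annotators, so a
# reviewer can relabel that subset consistently. Data and threshold are made up;
# this is not LandingLens internals.
from collections import Counter

def flag_inconsistent(labels_by_image: dict[str, list[str]], min_agreement: float = 1.0):
    """Return images whose most common label falls below the agreement threshold."""
    flagged = []
    for image_id, labels in labels_by_image.items():
        most_common_count = Counter(labels).most_common(1)[0][1]
        agreement = most_common_count / len(labels)
        if agreement < min_agreement:
            flagged.append((image_id, agreement, labels))
    return sorted(flagged, key=lambda item: item[1])  # worst agreement first

if __name__ == "__main__":
    annotations = {  # hypothetical multi-annotator labels for defect images
        "img_001.png": ["scratch", "scratch", "scratch"],
        "img_002.png": ["pit_mark", "dent", "pit_mark"],
        "img_003.png": ["discoloration", "scratch", "dent"],
    }
    for image_id, agreement, labels in flag_inconsistent(annotations):
        print(f"{image_id}: agreement {agreement:.2f}, labels {labels}")
```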
How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up? Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations. In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists? So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work. Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains. Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement? Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it. This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
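The data-drift flagging Ng mentions can be illustrated with a simple distribution comparison between a reference window collected when the model was set up and recent production data. The statistic below, the population stability index, and the 0.2 alert threshold are widely used monitoring conventions rather than anything specific to Landing AI, and the simulated brightness feature is an assumption for illustration.

```python
# Illustrative sketch: flag data drift by comparing one model input's distribution in
# production against a reference window, using the population stability index (PSI).
# Bin edges, threshold, and data are assumptions for illustration only.
import math
import random

def psi(reference: list[float], current: list[float], bins: int = 10) -> float:
    """Population stability index between two samples of one feature."""
    lo, hi = min(reference), max(reference)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0] = float("-inf")   # catch values below the reference range
    edges[-1] = float("inf")   # ...and above it

    def fractions(values):
        counts = [0] * bins
        for v in values:
            for i in range(bins):
                if edges[i] <= v < edges[i + 1]:
                    counts[i] += 1
                    break
        return [(c + 1e-6) / (len(values) + 1e-6 * bins) for c in counts]  # avoid log(0)

    ref_frac, cur_frac = fractions(reference), fractions(current)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref_frac, cur_frac))

if __name__ == "__main__":
    random.seed(0)
    reference = [random.gauss(0.0, 1.0) for _ in range(5000)]  # e.g. image brightness at setup
    drifted = [random.gauss(0.8, 1.2) for _ in range(5000)]    # lighting changed on the line
    score = psi(reference, drifted)
    print(f"PSI = {score:.2f} ->", "drift: review, relabel, retrain" if score > 0.2 else "stable")
```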
- How AI Will Change Chip Design by Rina Diane Caballar on 8 February 2022 at 14:00
The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process. Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version. But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform. How is AI currently being used to design the next generation of chips? Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider. Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI. What are the benefits of using AI for chip design? Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design. So it’s like having a digital twin in a sense? Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end. So, it’s going to be more efficient and, as you said, cheaper? Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things.
That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering. We’ve talked about the benefits. How about the drawbacks? Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years. Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together. One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge. How can engineers use AI to better prepare and extract insights from hardware or sensor data? Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start. One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI. What should engineers and designers consider when using AI for chip design? Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team. How do you think AI will affect chip designers’ jobs? Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip. How do you envision the future of AI and chip design? Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. 
We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
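The reduced-order, surrogate-model workflow Gorr describes earlier in this interview (run a handful of expensive physics-based simulations, fit a cheap surrogate, then do parameter sweeps and Monte Carlo on the surrogate) can be sketched as follows. The simulation function, the quadratic surrogate, and the process-variation numbers are placeholders invented for illustration, not a MathWorks or MATLAB workflow; the sketch uses Python with NumPy.

```python
# Illustrative sketch of a surrogate-model workflow: run a few "expensive"
# physics-based simulations, fit a cheap polynomial surrogate, then do a Monte Carlo
# sweep on the surrogate. The physics function and ranges are invented placeholders.
import numpy as np

def expensive_simulation(gate_length_nm: np.ndarray) -> np.ndarray:
    """Stand-in for a slow physics-based model (e.g. delay vs. a design parameter)."""
    return 0.02 * (gate_length_nm - 14.0) ** 2 + 1.0 + 0.05 * np.sin(gate_length_nm)

# 1. A small, carefully chosen set of expensive runs.
design_points = np.linspace(7.0, 28.0, 8)
observed = expensive_simulation(design_points)

# 2. Fit a cheap surrogate (here a quadratic; Gaussian processes are also common).
coeffs = np.polyfit(design_points, observed, deg=2)

# 3. Monte Carlo on the surrogate: propagate assumed manufacturing variation cheaply.
rng = np.random.default_rng(seed=1)
samples = rng.normal(loc=14.0, scale=1.5, size=100_000)
predicted = np.polyval(coeffs, samples)

print(f"surrogate prediction: mean {predicted.mean():.3f}, "
      f"99th percentile {np.percentile(predicted, 99):.3f}")
```

A Gaussian-process or neural-network surrogate would play the same role; the point is that the hundred thousand Monte Carlo evaluations run on the cheap model rather than on the physics solver.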
- Atomically Thin Materials Significantly Shrink Qubits by Dexter Johnson on 7 February 2022 at 16:12
Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality. IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability. Now researchers at MIT have been able both to reduce the size of the qubits and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100. “We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.” The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit. Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C). Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another. As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance. In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics. On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas. While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor. “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are “sealed” and we don’t see any noticeable degradation over time when exposed to the atmosphere.” This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits. “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang. Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
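A rough, back-of-the-envelope calculation shows why a thin hBN dielectric shrinks the capacitor footprint so much compared with the roughly 100-by-100-micrometer coplanar pads described above. The target capacitance (about 70 femtofarads), the hBN thickness (a few tens of nanometers), and the relative permittivity (about 3) used below are order-of-magnitude assumptions drawn from general literature on transmon qubits and hBN, not figures reported for the MIT device.

```python
# Back-of-the-envelope estimate (not the MIT group's numbers): how small can a
# parallel-plate qubit capacitor be with a thin hBN dielectric, versus the
# ~100 x 100 micrometer coplanar pads described in the article?
EPS0 = 8.854e-12          # vacuum permittivity, F/m
EPS_R_HBN = 3.0           # assumed out-of-plane relative permittivity of hBN (order ~3)
THICKNESS_M = 20e-9       # assumed hBN stack thickness: a few tens of nanometers
TARGET_C = 70e-15         # assumed transmon shunt capacitance, ~70 fF

# Parallel plate: C = eps0 * eps_r * A / d  ->  A = C * d / (eps0 * eps_r)
area_m2 = TARGET_C * THICKNESS_M / (EPS0 * EPS_R_HBN)
side_um = (area_m2 ** 0.5) * 1e6

coplanar_area_um2 = 100.0 * 100.0   # footprint quoted for coplanar designs
plate_area_um2 = area_m2 * 1e12

print(f"parallel-plate area ~ {plate_area_um2:.0f} square micrometers "
      f"(~ {side_um:.1f} micrometers per side)")
print(f"area reduction vs. 100 x 100 micrometer coplanar pads: "
      f"~ {coplanar_area_um2 / plate_area_um2:.0f}x")
```

Under these assumed numbers the plate area comes out around 50 square micrometers, roughly two orders of magnitude smaller than the coplanar pads, which is consistent with the factor-of-100 density gain reported above.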