IEEE Spectrum
- Application of WiPL-D Pro CAD to Flexible Antennas by WIPL-D on 7 June 2023 at 12:32
This whitepaper demonstrates the application of the powerful new Wrap command of the WiPL-D Pro CAD EM simulator, which enables the analysis of flexible and wearable circuits. Download this free whitepaper now! The Wrap command is versatile and user-friendly, allowing simple and effective transformation of flat sheet bodies into bodies wrapped over arbitrarily complex developable surfaces. This is demonstrated with examples of a flat microstrip patch antenna wrapped around cylinders aligned along the x-axis and the y-axis, comparing their effects on antenna performance.
- Explainer: Why No-Code Software Isn't Just For Developers by Dina Genkina on 5 June 2023 at 20:00
Dina Genkina: Hi. I’m Dina Genkina for IEEE Spectrum’s Fixing the Future. This episode is brought to you by IEEE Xplore, the digital library with over 6 million pieces of the world’s best technical content. In the November issue of IEEE Spectrum, one of our most popular stories was about code that writes its own code. Here to probe a little deeper is the author of that article, Craig Smith. Craig is a former New York Times correspondent and host of his own podcast, Eye On AI. Welcome to the podcast, Craig. Craig Smith: Hi. Genkina: Thank you for joining us. So you’ve been doing a lot of reporting on these new artificial intelligence models that can write their own code, to whatever capacity they can do that. So maybe we can start by highlighting a couple of your favorite examples, and you can explain a little bit about how they work. Smith: Yeah. Absolutely. First of all, the reason I find this so interesting is that I don’t code myself. And I’ve been talking to people for a couple of years now about when artificial intelligence systems will get to the point that I can talk to them and they’ll write a computer program based on what I’m asking them to do. It’s an idea that’s been around for a long time. A lot of people think this exists already because they’re used to talking to Siri or Alexa or Google Assistant or some other virtual assistant. But you’re not actually writing code when you talk to Siri or Alexa or Google Assistant. That changed when they built GPT-3, the successor to GPT-2, which was a much larger language model. These large language models are trained on huge corpora of data and based primarily on something called a transformer algorithm. They were really focused on text, on human natural language. But kind of a side effect was that there’s a lot of HTML code out on the internet, and GPT-3, it turns out, learned HTML code just as it learned natural English.
The first application of these large language models’ ability to write code came from GitHub. Together with OpenAI and Microsoft, they created a product called Copilot. And it’s pair programming. I mean, oftentimes when programmers are writing code, they work in teams. In pairs. One person writes the initial code, and the other person cleans it up, checks it, and tests it. And if you don’t have someone to work with, then you have to do that yourself, and it takes twice as long. So GitHub created this thing based on GPT-3 called Copilot, and it acts as that second set of hands. When you begin to write a line of code, it’ll autocomplete that line, just as happens now with Microsoft Word or any word-processing program. And then the coder can accept, modify, or delete that suggestion. GitHub recently did a survey and found that coders can code twice as fast using Copilot to help autocomplete their code than if they were working on their own. Genkina: Yeah. So maybe we could put a bit of a framework to this. Programming in its most basic form, back in the old days, used to be with these punch cards, right? And when you get down to what you’re telling the computer to do, it’s all ones and zeros. So the base way to talk to a computer is with ones and zeros. But then people developed more complicated tools so that programmers don’t have to sit around and type ones and zeros all day long. And there are simpler programming languages and slightly more sophisticated, higher-level programming languages, so to speak. They’re kind of closer to words, although definitely not natural language. They will use some words, but they still have to follow a somewhat rigid logical structure. So I guess one way to think about it is that these tools are kind of moving on to the next level of abstraction above that, or trying to do so. Smith: That’s right.
And that started really in the forties, or I guess in the fifties, at a company called Remington Rand, where a woman named Grace Hopper introduced a programming language that used English-language vocabulary. So instead of having to write in mathematical symbols, the programmers could write import, for example, to ingest some other piece of code. And that started this ladder of increasingly efficient languages to where we are today with things like Python. I mean, they’re primarily English-language words and different kinds of punctuation. There isn’t a lot of mathematical notation in them. So what’s happened with these large language models, what happened with HTML code and is now happening with other programming languages, is that you’re able to speak to them instead. As with CodeWhisperer or Copilot, where you write in a programming language and the system autocompletes what you started writing, you can write in natural language and the computer will interpret that and write the code associated with it. And that opens up this vista of what I’m dreaming of: being able to talk to a computer and have it write a program. The problem with that is that, as I was saying, natural language is so imprecise that you either need to learn to speak or write in a very constrained way for the computer to understand you. Even then, there’ll be ambiguities. So there’s a group at Microsoft that has come up with a system called TiCoder. It’s just a research paper now. It hasn’t been productized. But you tell the computer that you want it to do something in very spare, imprecise language. The computer will see that there are several ways to code that phrase, and so it will come back and ask for clarification of what you mean.
And that interaction, that back-and-forth, then refines the meaning or the intent of the person who’s talking or writing instructions to the computer to the point that it’s adequately precise, and then the computer generates the code. So I think eventually there will still be very high-level data scientists that learn coding languages, but this opens up software development to a large swath of people who will no longer need to know a programming language. They’ll just need to understand how to interact with these systems. And that will require them to understand, as you were saying at the outset, the logical flow of a program and the syntax of programming languages, and to be aware of the ambiguities in natural language. Some of that’s already finding its way into products. There’s a company called Akkio that has a no-code platform. It’s primarily a drag-and-drop interface, and it works primarily on tabular data. You drag a spreadsheet and drop it into their interface, and then you click a bunch of buttons on what you want to train the program on. What you want the program to predict. These are predictive models. Then you hit a button, and it trains the program. And then you feed it your untested data, and it will make the predictions on that data. It’s used for a lot of fascinating things. Right now, it’s being used in the political sphere to predict who in a list of 20,000 contacts will donate to a particular party or campaign. So it’s really changing political fundraising. And Akkio has just come out with a new feature which I think you’ll start seeing in a lot of places. One of the issues in working with data is cleaning it up. Getting rid of outliers. Rationalizing the language. You may have a column where some things are written out in words and other things are numbers, and you need to get them all into numbers. Things like that.
That kind of clean-up is extremely time-consuming and tedious. And Akkio has tapped into a large language model. So they’re using a large language model. It’s not their model. But you just write into the interface, in natural language, what you want done. Say you want to combine three columns that give the day of the week, the month, and the year into a single number so that the computer can deal with it more easily. You can just tell the interface in simple English what you want. And you can be fairly imprecise in your English, and the large language model will understand what you mean. So it’s an example of how this new ability is being implemented in products. I think it’s pretty amazing. And I think you’ll see that spread very quickly. I mean, this is all a long way from my talking to a computer and having it create a complicated program for me. These are still very basic. Genkina: Yeah. So you mention in your article that this isn’t actually about to put coders out of a job, right? Is that just because you think it’s not there yet, that the technology’s not at that level? Or is that fundamentally not what’s happening, in your view? Smith: Well, the technology certainly isn’t there yet. It’s going to be a very long time before— well, I don’t know that it’s going to be a long time, because things have moved so quickly. But it’ll be a while yet before you’ll be able to speak to a computer and have it write complex programs. But what will happen, I think fairly quickly, is that with things like AlphaCode in the background and things like TiCoder that interact with the user, people won’t need to learn computer programming languages any longer in order to code. They will need to understand the structure of a program, the logic and syntax, and they’ll have to understand the nuances and ambiguities in natural language.
I mean, if you turned it over to someone who wasn’t aware of any of those things, I think it would not be very effective. But I can see that computer science students will still learn C++ and Python, because you learn the basics in any field that you’re going into. But the actual application will be through natural language, working with one of these interactive systems. And what that allows is just a much broader population to get involved in programming and developing software. And we really need that, because there is a real shortage of capable computer programmers and coders out there. The world is going through this digital transformation. Every process is being turned into software. And there just aren’t enough people to do that. That’s what’s holding that transformation back. So as you broaden the population of people that can do that, more software will be developed in a shorter period of time. I think it’s very exciting. Genkina: So maybe we can get into a little bit of the copyright issues surrounding this. For example, GitHub Copilot sometimes spits out bits of code that are found in the training data it was trained on. So there’s a pool of training data from the internet, like you mentioned in the beginning, and the output this program’s auto-completer suggests is some combination of all the inputs, maybe put together in a creative way, but sometimes it’s a straight copy of bits of code from the input. And some of these input bits of code have copyright licenses. Smith: Yeah. Yeah. That’s interesting. I remember when sampling started in the music industry. And I thought it would be impossible to track down the author of every bit of music that was sampled and work out some kind of a licensing deal that would compensate the original artist. But that’s happened, and people are very quick to spot samples that use their original music if they haven’t been compensated. In this realm, to me, it’s a little different. It’ll be interesting to see what happens.
Because the human mind ingests data and then produces theoretically original thought, but that thought is really just a jumble of everything that you’ve ingested. I had this conversation recently about whether the human mind is really just a large language model that has trained on all of the information that it’s been exposed to. And it seems to me that, on the one hand, it’s impossible to trace every input for any particular output as these systems get larger. And I just think it’s unreasonable to expect every piece of human creative output to be copyrighted and tracked through all of the various iterations that it goes through. I mean, you look at the history of art. Every artist in the visual arts is drawing on his predecessors and using ideas and things to create something new. I haven’t looked at any particular cases where it’s glaring that the code or the language is clearly identifiable as coming from one source. I don’t know how to put it. I think the world is getting so complex that once creative output is out there, unless it’s something like sampling in music where it’s clearly identifiable, it’s going to be impossible to credit and compensate everyone whose output became an input to that computer program. Genkina: My next question was about who should get paid for code written by these big AIs, but I guess you kind of suggested a model where everyone responsible for the training data would get a little bit of royalties for every use. I guess, long term, that’s probably not super viable, because a few generations from now there’s going to be no one that contributed to the training data. Smith: Yeah. But that is interesting, who owns these models that are written by a computer. It’s something I really haven’t thought about. And I don’t know if you’ll cut this out, but have you read anything about that topic?
About who will own— if DeepMind’s AlphaCode becomes a product and writes a program that becomes extremely useful and is used around the world and generates potentially a lot of revenue, who owns that model? I don’t know. Genkina: So what is your expectation for what will happen in this arena in the coming 5 to 10 years or so? Smith: Well, in terms of auto-generated code, I think it’s going to progress very quickly. I mean, transformers came out in 2017, I think. And a few years later, you have AlphaCode writing complete programs from natural language. And now you have TiCoder, in the same year, a system that refines the natural-language intent. I think in five years, yeah, we’ll be able to write basic software programs from speech. It’ll take much longer to write something like GPT-3. That’s a very, very complicated program. But the more that these algorithms are commoditized, the easier I think combining them will be. So in 10 years, yeah, I think it’s possible that you’ll be able to talk to a computer and, again not as an untrained person but as a person that understands how programming works, program a fairly complex program. This cycle kind of builds on itself, because the more people that can participate in development, the more software gets created, but it also frees up the high-level data scientists to develop novel algorithms and new systems. So I see it as accelerating, and it’s an exciting time. [music] Genkina: Today on Fixing the Future, we spoke to Craig Smith about AI-generated code. I’m Dina Genkina for IEEE Spectrum, and I hope you’ll join us next time on Fixing the Future.
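The column-combining clean-up Smith describes, merging day, month, and year fields into one machine-readable number, can be sketched in a few lines of pandas. This is an illustrative sketch with invented column names and data, not Akkio's implementation:

```python
import pandas as pd

# Hypothetical contact spreadsheet with the date split across three columns,
# the month written out in words -- the kind of messy tabular data described.
df = pd.DataFrame({
    "day":   [3, 17, 9],
    "month": ["June", "June", "July"],
    "year":  [2023, 2023, 2023],
})

# Combine the three columns into a single datetime...
df["date"] = pd.to_datetime(
    df["day"].astype(str) + " " + df["month"] + " " + df["year"].astype(str),
    format="%d %B %Y",
)

# ...then into one number (seconds since the Unix epoch) that a
# predictive model can consume directly.
df["date_numeric"] = (df["date"] - pd.Timestamp("1970-01-01")) // pd.Timedelta("1s")

print(df[["date", "date_numeric"]])
```

In a natural-language interface, the user would just type "combine day, month, and year into one date column"; the generated code would look much like the two statements above.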
- Get to Know the IEEE Board of Directors by IEEE on 5 June 2023 at 18:00
The IEEE Board of Directors shapes the future direction of IEEE and is committed to ensuring IEEE remains a strong and vibrant organization—serving the needs of its members and the engineering and technology community worldwide—while fulfilling the IEEE mission of advancing technology for the benefit of humanity. This article features IEEE Board of Directors members Jill Gostin, Stephanie White, and Yu Yuan. IEEE Senior Member Jill Gostin Director and Vice President, Member and Geographic Activities Jill Gostin, an IEEE senior member, is director and vice president of IEEE Member and Geographic Activities. Gostin is a dedicated mathematician and community leader whose work centers around systems engineering, algorithm assessment, and software testing and evaluation, specifically related to sensor systems. She is a principal research scientist in applied research programs pertaining to sensors and electromagnetic applications. Her current work focuses on open architecture sensor systems, which allow systems to reuse existing technologies, providing the flexibility to quickly refresh an existing component of the system or swap in new technologies. Gostin uses a model-based systems engineering approach to develop the open architecture and the associated standard. By providing a standard to define the interfaces between components of the system, modifications and innovations can be quickly and easily incorporated. Gostin, an active IEEE volunteer, has served on the IEEE Future Directions Committee, on the Board of Governors of the IEEE Computer Society and the IEEE Aerospace and Electronic Systems Society, and as vice president of finance for the IEEE Sensors Council, among many other IEEE roles. She believes in leading by example and says it is important to help others in advancing their career paths.
Through the IEEE Computer Society, she was a representative to IEEE’s Women in Engineering program, which works to increase the representation of women in engineering disciplines. Gostin has also served as a STEM mentor to middle and high school math and science classes and as a panelist for discussions on women in technology. She has authored or coauthored multiple technical papers and has received multiple technical and service awards. In 2016, she was named Georgia’s Women in Technology Woman of the Year for mid-size businesses, an award recognizing women technology executives for their accomplishments as leaders in business, as visionaries of technology, and as people who make a difference in their communities. IEEE Life Senior Member Stephanie White Director, Division X IEEE Life Senior Member Stephanie White is director of IEEE Division X. White is an educator, technical leader, corporate manager, and entrepreneur. She is a pioneer in software and system requirements engineering—making significant and lasting contributions in the behavior modeling, requirements semantics, and requirements analysis fields, resulting in less costly and safer cyber-physical systems. As a principal engineer of requirements and architecture, White was responsible for detecting errors in requirements on eight multimillion-dollar aircraft and space programs, producing higher-quality specifications with lower cost and risk. Recognizing the need for verifiable methods that practicing engineers can use, she created scalable and practical modeling and analytic techniques based on formal methods. Her methods were used to ensure the correctness of aircraft and space programs. Addressing the need for research in engineering systems where computer systems have an essential role, she founded the IEEE Technical Committee on Engineering of Computer-Based Systems in 1990. This area of research is now known as cyber-physical systems engineering.
White, a lifelong IEEE volunteer, has held many positions, including president of the IEEE Systems Council and vice president of technical activities for the IEEE Computer Society (also serving on its board of governors from 2006 to 2008). She wants to use her current position within IEEE to improve the return on members’ investment, broaden IEEE’s membership base, and advance technology for humanity. Currently a senior professor emeritus, White has taught systems science, systems engineering, and computer science. She still participates in dissertation committees. White received the 2013 IEEE-USA Divisional Professional Leadership Award for inspiring women to study and work in the STEM fields and for leadership in diversity initiatives. IEEE Senior Member Yu Yuan Director and President, IEEE Standards Association An IEEE Senior Member, Yu Yuan is director and president of the IEEE Standards Association. Yuan is a scientist, inventor, and entrepreneur. His work in consumer technology, multimedia, virtual reality, the Internet of Things, and digital transformation has significantly impacted industry and society. His current work focuses on developing technologies, infrastructures, ecosystems, and resources needed for massively multiplayer ultra-realistic virtual experiences. Yuan also works on building an international metaverse incubation and collaboration platform, providing access to knowledge and resources for metaverse development. His efforts have empowered a new generation of innovators and creators to push the boundaries of digital experiences—enabling a new era of immersive, interconnected, and intelligent technologies. Yuan has been an IEEE volunteer for many years. His service in IEEE standards activities at different levels (working groups, standards committees, and higher-level governance) has been widely appreciated by standards developers, individual members, and entity members around the world.
As the current president of the IEEE Standards Association (IEEE SA), he plays a pivotal role in shaping global standards, fostering collaboration, and driving innovation in the technology sector. He believes that IEEE SA has the opportunity for significant growth and to become a stronger global influence. He is committed to encouraging, supporting, and protecting innovation in standards and the standards development process. Yuan is also a member of the IEEE Consumer Technology Society and a member-at-large on the society’s board of governors. From 2015 to 2020, he led the IEEE Consumer Technology Society Standards Committee, growing the society’s standards activities from zero to a top level among IEEE technical societies and councils. The committee received the 2019 IEEE SA Standards Committee Award for exceptional leadership in entity-based standards development and industry engagement in consumer technology.
- Machine Learning Turns Up COVID Surprise by Greg Uyeno on 3 June 2023 at 14:00
A hospital visit can be boiled down to an initial ailment and an outcome. But health records tell a different story, full of doctors’ notes and patient histories, vital signs and test results, potentially spanning weeks of a stay. In health studies, all of that data is multiplied by hundreds of patients. It’s no wonder, then, that as AI data processing techniques grow increasingly sophisticated, doctors are treating health as an AI and big-data problem. In one recent effort, researchers at Northwestern University have applied machine learning to electronic health records to produce a more granular, day-to-day analysis of pneumonia in an intensive care unit (ICU), where patients received assistance breathing from mechanical ventilators. The analysis, published 27 April in the Journal of Clinical Investigation, includes clustering of patient days by machine learning, which suggests that long-term respiratory failure and the risk of secondary infection are much more common in COVID-19 patients than the subject of much early COVID fears—cytokine storms. “Most methods that approach data analysis in the ICU look at data from patients when they’re admitted, then outcomes at some distant time point,” said Benjamin D. Singer, a study coauthor and associate professor at Northwestern’s Feinberg School of Medicine. “Everything in the middle is a black box.” The hope is that AI can make new clinical findings from daily ICU patient status data beyond the COVID-19 case study. The day-wise approach to the data led researchers to two related findings: that secondary respiratory infections are a common threat to ICU patients, including those with COVID-19; and that COVID-19 is strongly associated with respiratory failure, which can be interpreted as an unexpected lack of evidence for cytokine storms in COVID-19 patients. An eventual shift to multiple-organ failure might be expected if patients had an inflammatory cytokine response, which the researchers did not find.
Reported rates vary, but cytokine storms have since the earliest days of the pandemic been considered a dangerous possibility in severe COVID-19 cases. Some 35 percent of patients were diagnosed with a secondary infection, also known as ventilator-associated pneumonia (VAP), at some point during their ICU stays. More than 57 percent of COVID-19 patients developed VAP, compared to 25 percent of non-COVID patients. Multiple VAP episodes were reported for almost 20 percent of COVID-19 patients. Catherine Gao, an instructor of medicine at Northwestern University and one of the study’s coauthors, said the machine-learning algorithms they used helped the researchers “see clear patterns emerge that made clinical sense.” The team dubbed their day-focused machine learning approach CarpeDiem, after the Latin phrase meaning “seize the day.” CarpeDiem was built using the Jupyter Notebook platform, and the team has made both the code and deidentified data available. The data set included 44 different clinical parameters for each patient day, and the clustering approach returned 14 groups with different signatures of six types of organ dysfunction: respiratory, ventilator instability, inflammatory, renal, neurologic, and shock. “The field has focused on the idea that we can look at early data and see if that predicts how [patients] are going to do days, weeks, or months later,” said Singer. The hope, he said, is that research using daily ICU patient status rather than just a few time points can tell investigators—and the AI and machine-learning algorithms they use—more about the efficacy of different treatments or responses to changes in a patient’s condition. One future research direction would be to examine the momentum of illness, Singer said.
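The core idea, treating each patient-day as one observation and clustering those observations, can be sketched with off-the-shelf tools. This is a minimal illustration under stated assumptions, not the published CarpeDiem code: the data are synthetic random numbers, and the feature count and cluster count are placeholders (the study used 44 clinical parameters and found 14 clusters).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data set: one row per patient-day,
# columns are clinical parameters (6 here; the study used 44).
n_patient_days, n_features = 500, 6
X = rng.normal(size=(n_patient_days, n_features))

# Standardize so that no single parameter dominates the distance metric,
# then cluster the patient-days (5 clusters here is arbitrary).
X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_scaled)

# Each patient-day now carries a cluster label, so a patient's ICU stay
# becomes a day-by-day sequence of clinical states rather than a black box.
print(np.bincount(labels))
```

With real records, a patient's trajectory through the cluster labels is what opens up the "middle" of the stay that Singer describes.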
The technique the researchers developed (which they called the “patient-day approach”) might catch other changes in clinical states with less time between data points, said Sayon Dutta, an emergency physician at Massachusetts General Hospital who helps develop predictive models for clinical practice using machine learning and was not involved in the study. Hourly data could present its own problems to a clustering approach, he said, making patterns difficult to recognize. “I think splitting the day up into 8-hour chunks instead might be a good compromise of granularity and dimensionality,” he said. Calls to incorporate new techniques to analyze the large amounts of ICU health data predate the COVID-19 pandemic. Machine learning or computational approaches more broadly could be used in the ICU in a variety of ways, not just in observational studies. Possible applications could use daily health records, as well as real-time data recorded by health care devices, or involve designing responsive machines that incorporate a range of available information. The overall mortality rates were around 40 percent in both patients who developed a secondary infection and those who did not. But among study patients with one diagnosed case of VAP, if their secondary pneumonia was not successfully treated within 14 days, 76.5 percent eventually died or were sent to hospice care. The rate was 17.6 percent among those whose secondary pneumonia was considered cured. Both groups included roughly 50 patients. Singer stresses that the risk of secondary pneumonia is typically a necessary one. “The ventilator is absolutely lifesaving in these instances. It’s up to us to figure out how to best manage complications that arise from it,” he said. “You have to be alive to experience a complication.”
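Dutta's suggested compromise, aggregating hourly readings into 8-hour chunks, is straightforward with standard time-series tooling. A minimal sketch, using invented vital-sign data for a single hypothetical patient:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical hourly vital-sign recordings for one ICU patient over two days.
hours = pd.date_range("2023-06-01", periods=48, freq="h")
vitals = pd.DataFrame(
    {"heart_rate": rng.normal(85, 8, 48), "spo2": rng.normal(96, 1.5, 48)},
    index=hours,
)

# Aggregate the hourly readings into 8-hour chunks: finer than one row
# per day, but far less noisy and high-dimensional than raw hourly data.
chunks = vitals.resample("8h").mean()

print(chunks)
```

Each 8-hour row could then feed the same kind of clustering applied to patient-days, trading some granularity for recognizable patterns.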
- Video Friday: Autonomous Car Drifting, Aerial-Aquatic Drone, and Jet-Powered Robot by Erico Guizzo on 2 June 2023 at 16:12
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. This week, we’re featuring a special selection of videos from ICRA 2023! We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON, TEXAS, USA
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT, MICHIGAN, USA
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS, USA
Enjoy today’s videos! “Autonomous Drifting With 3 Minutes of Data Via Learned Tire Models,” by Franck Djeumou, Jonathan Y.M. Goh, Ufuk Topcu, and Avinash Balachandran from the University of Texas at Austin and the Toyota Research Institute, in Los Altos, Calif. Abstract: Near the limits of adhesion, the forces generated by a tire are nonlinear and intricately coupled. Efficient and accurate modeling in this region could improve safety, especially in emergency situations where high forces are required. To this end, we propose a novel family of tire force models based on neural ordinary differential equations and a neural-ExpTanh parameterization. These models are designed to satisfy physically insightful assumptions while also having sufficient fidelity to capture higher-order effects directly from vehicle state measurements. They are used as drop-in replacements for an analytical brush tire model in an existing nonlinear model predictive control framework. Experiments with a customized Toyota Supra show that scarce amounts of driving data—less than 3 minutes—are sufficient to achieve high-performance autonomous drifting on various trajectories with speeds up to 45 miles per hour.
Comparisons with the benchmark model show a 4x improvement in tracking performance, smoother control inputs, and faster and more consistent computation time. “TJ-FlyingFish: Design and Implementation of an Aerial-Aquatic Quadrotor With Tiltable Propulsion Units,” by Xuchen Liu, Minghao Dou, Dongyue Huang, Songqun Gao, Ruixin Yan, Biao Wang, Jinqiang Cui, Qinyuan Ren, Lihua Dou, Zhi Gao, Jie Chen, and Ben M. Chen from Shanghai Research Institute for Intelligent Autonomous Systems, Tongji University, Shanghai, China; the Chinese University of Hong Kong; Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu, China; Peng Cheng Laboratory, Shenzhen, Guangdong, China; Zhejiang University, Hangzhou, Zhejiang, China; Beijing Institute of Technology, in China; and Wuhan University, Wuhan, Hubei, China. Abstract: Aerial-aquatic vehicles are capable of moving in the two most dominant fluids, making them promising for a wide range of applications. We propose a prototype with special designs for propulsion and thruster configuration to cope with the vast differences in the fluid properties of water and air. For propulsion, the operating range is switched for the different mediums by the dual-speed propulsion unit, providing sufficient thrust and also ensuring output efficiency. For thruster configuration, thrust vectoring is realized by the rotation of the propulsion unit around the mount arm, thus enhancing underwater maneuverability. This paper presents a quadrotor prototype of this concept and the design details and realization in practice. “Towards Safe Landing of Falling Quadruped Robots Using a 3-DoF Morphable Inertial Tail,” by Yunxi Tang, Jiajun An, Xiangyu Chu, Shengzhi Wang, Ching Yan Wong, and K. W. Samuel Au from the Chinese University of Hong Kong and the Multiscale Medical Robotics Centre, in Hong Kong. Abstract: The falling cat problem is well known: cats show a remarkable aerial reorientation capability and can land safely.
For their robotic counterparts, a similar falling quadruped robot problem has not been fully addressed, although achieving as safe a landing as cats do has been increasingly investigated. Rather than imposing the burden on landing control, we approach the safe landing of falling quadruped robots through effective flight-phase control. Different from existing work like swinging legs and attaching reaction wheels or simple tails, we propose to deploy a 3-DoF morphable inertial tail on a medium-size quadruped robot. In the flight phase, the tail at its maximum length can self-right the body orientation in 3D effectively; before touchdown, the tail can be retracted to about 1/4 of its maximum length to suppress the tail’s side effect on landing. To enable aerial reorientation for safe landing in quadruped robots, we design a control architecture that is verified in a high-fidelity physics simulation environment with different initial conditions. Experimental results on a customized flight-phase test platform with comparable inertial properties are provided and show the tail’s effectiveness for 3D body reorientation and its fast retractability before touchdown. An initial falling quadruped robot experiment is shown, where the robot Unitree A1 with the 3-DoF tail can land safely subject to non-negligible initial body angles. “Nonlinear Model Predictive Control of a 3D Hopping Robot: Leveraging Lie Group Integrators for Dynamically Stable Behaviors,” by Noel Csomay-Shanklin, Victor D. Dorobantu, and Aaron D. Ames from Caltech, in Pasadena, Calif. Abstract: Achieving stable hopping has been a hallmark challenge in the field of dynamic legged locomotion. Controlled hopping is notably difficult due to extended periods of underactuation combined with very short ground phases wherein ground interactions must be modulated to regulate a global state.
In this work, we explore the use of hybrid nonlinear model predictive control paired with a low-level feedback controller in a multirate hierarchy to achieve dynamically stable motions on a novel 3D hopping robot. In order to demonstrate richer behaviors on the manifold of rotations, both the planning and feedback layers must be designed in a geometrically consistent fashion; therefore, we develop the necessary tools to employ Lie group integrators and appropriate feedback controllers. We experimentally demonstrate stable 3D hopping on a novel robot, as well as trajectory tracking and flipping in simulation. “Fast Untethered Soft Robotic Crawler with Elastic Instability,” by Zechen Xiong, Yufeng Su, and Hod Lipson from Columbia University, New York, N.Y. Abstract: Inspired by the fast-running gait of mammals like cheetahs and wolves, we design and fabricate a single-actuated untethered compliant robot that is capable of galloping at a speed of 313 millimeters per second or 1.56 body lengths per second (BL/s), faster than most reported soft crawlers in mm/s and BL/s. An in-plane prestressed hair clip mechanism (HCM) made of semirigid material, i.e., plastic, is used as the supporting chassis, the compliant spine, and the force amplifier of the robot at the same time, enabling the robot to be simple, rapid, and strong. With experiments, we find that the HCM robotic locomotion speed is linearly related to actuation frequencies and substrate friction differences except for concrete surfaces, that tethering slows down the crawler, and that asymmetric actuation creates a new galloping gait. This paper demonstrates the potential of HCM-based soft robots.
“Nature Inspired Machine Intelligence from Animals to Robots,” by Thirawat Chuthong, Wasuthorn Ausrivong, Binggwong Leung, Jettanan Homchanthanakul, Nopparada Mingchinda, and Poramate Manoonpong from Vidyasirimedhi Institute of Science and Technology (VISTEC), Thailand, and the Maersk Mc-Kinney Moller Institute, University of Southern Denmark. Abstract: In nature, living creatures show versatile behaviors. They can move on various terrains and perform impressive object manipulation/transportation using their legs. Inspired by their morphologies and control strategies, we have developed bioinspired robots and adaptive modular neural control. In this video, we demonstrate our five bioinspired robots in our robot zoo setup. Inchworm-inspired robots with two electromagnetic feet (Freelander-02 and AVIS) can adaptively crawl and balance on horizontal and vertical metal pipes. With special design, the Freelander-02 robot can adapt its posture to crawl underneath an obstacle, while the AVIS robot can step over a flange. A millipede-inspired robot with multiple body segments (Freelander-08) can proactively adapt its body joints to efficiently navigate on bump terrain. A dung beetle–inspired robot (ALPHA) can transport an object by grasping it with its hind legs and at the same time walk backward with the remaining legs like dung beetles. Finally, an insect-inspired robot (MORF), which is a hexapod robot platform, demonstrates typical insectlike gaits (slow wave and fast tripod gaits). In a nutshell, we believe that this bioinspired robot zoo demonstrates how the diverse and fascinating abilities of living creatures can serve as inspiration and principles for developing robotics technology capable of achieving multiple robotic functions and solving complex motor control problems in systems with many degrees of freedom. 
“AngGo: Shared Indoor Smart Mobility Device,” by Yoon Joung Kwak, Haeun Park, Donghun Kang, Byounghern Kim, Jiyeon Lee, and Hui Sung Lee from Ulsan National Institute of Science and Technology (UNIST), in Ulsan, South Korea. Abstract: AngGo is a hands-free shared indoor smart mobility device for public use. AngGo is a personal mobility device that is suitable for the movement of passengers in huge indoor spaces such as convention centers or airports. The user can use both hands freely while riding the AngGo. Unlike existing mobility devices, the mobility device can be maneuvered using the feet and was designed to be as intuitive as possible. The word “AngGo” is pronounced like a Korean word meaning “sit down and move.” There are 6 ToF distance sensors around AngGo. Half of them are in the front part and the other half are in the rear part. In the autonomous mode, AngGo avoids obstacles based on the distance from each sensor. IR distance sensors are mounted under the footrest to measure the extent to which the footrest is moved forward or backward, and these data are used to control the rotational speed of motors. The user can control the speed and the direction of AngGo simultaneously. The spring in the footrest generates force feedback, so the user can recognize the amount of variation. “Creative Robotic Pen-Art System,” by Daeun Song and Young Jun Kim from Ewha Womans University in Seoul, South Korea. Abstract: Since the Renaissance, artists have created artworks using novel techniques and machines, deviating from conventional methods. The robotic drawing system is one of such creative attempts that involves not only the artistic nature but also scientific problems that need to be solved. Robotic drawing problems can be viewed as planning the robot’s drawing path that eventually leads to the art form. The robotic pen-art system imposes new challenges, unlike robotic painting, requiring the robot to maintain stable contact with the target drawing surface. 
This video showcases an autonomous robotic system that creates pen art on an arbitrary canvas surface without restricting its size or shape. Our system converts raster or vector images into piecewise-continuous paths depending on stylistic choices, such as TSP art or stroke-based drawing. Our system consists of multiple manipulators with mobility and performs stylistic drawing tasks. In order to create more extensive pen art, the mobile manipulator setup finds a minimal number of discrete configurations for the mobile platform to cover the large canvas space. The dual-manipulator setup can generate multicolor pen art using adaptive three-finger grippers with a pen-tool-change mechanism. We demonstrate that our system can create visually pleasing and complicated pen art on various surfaces. “I Know What You Want: A ‘Smart Bartender’ System by Interactive Gaze Following,” by Haitao Lin, Zhida Ge, Xiang Li, Yanwei Fu, and Xiangyang Xue from Fudan University, in Shanghai, China. Abstract: We developed a novel “Smart Bartender” system, which can understand the intention of users just from their eye gaze and take corresponding actions. In particular, we believe that a cyber-barman who cannot read our faces is not an intelligent one. We thus aim at building a novel cyber-barman by capturing and analyzing the intention of the customers on the fly. Technically, such a system enables the user to select a drink simply by staring at it. Then the robotic arm mounted with a camera will automatically grasp the target bottle and pour the liquid into the cup. To achieve this goal, we first adopt YOLO to detect candidate drinks. Then, GazeNet is used to generate a potential gaze center, and the target bottle is grounded as the one with the minimum center-to-center distance to it. Finally, we use object pose estimation and path-planning algorithms to guide the robotic arm to grasp the target bottle and execute pouring.
Our system, integrated with category-level object pose estimation, delivers strong performance, generalizing to various unseen bottles and cups that were not used for training. We believe our system could not only reduce intensive human labor in different service scenarios but also provide users with interactivity and enjoyment. “Towards Aerial Humanoid Robotics: Developing the Jet-Powered Robot iRonCub,” by Daniele Pucci, Gabriele Nava, Fabio Bergonti, Fabio Di Natale, Antonello Paolino, Giuseppe L’erario, Affaf Junaid Ahamad Momin, Hosameldin Awadalla Omer Mohamed, Punith Reddy Vanteddu, and Francesca Bruzzone from the Italian Institute of Technology (IIT), in Genoa, Italy. Abstract: The current state of robotics technology lacks a platform that can combine manipulation, aerial locomotion, and bipedal terrestrial locomotion. Therefore, we define aerial humanoid robotics as the outcome of platforms with these three capabilities. To implement aerial humanoid robotics on the humanoid robot iCub, we conduct research in different directions. This includes experimental research on jet turbines and codesign, which is necessary to implement aerial humanoid robotics on the real iCub. These activities aim to model and identify the jet turbines. We also investigate flight control of flying humanoid robots using Lyapunov-quadratic-programming-based control algorithms to regulate both the attitude and position of the robot. These algorithms work independently of the number of jet turbines installed on the robot and ensure satisfaction of physical constraints associated with the jet engines. In addition, we research computational fluid dynamics for aerodynamics modeling. Since the aerodynamics of a multibody system like a flying humanoid robot is complex, we use CFD simulations with Ansys to extract a simplified model for control design, as there is little space for closed-form expressions of aerodynamic effects.
“AMEA Autonomous Electrically Operated One-Axle Mowing Robot,” by Romano Hauser, Matthias Scholer, and Katrin Solveig Lohan from Eastern Switzerland University of Applied Sciences (OST), in St. Gallen, Switzerland, and Heriot-Watt University, in Edinburgh, Scotland. Abstract: The goal of this research project (Consortium: Altatek GmbH, Eastern Switzerland University of Applied Sciences OST, Faculty of Law University of Zurich) was the development of a multifunctional, autonomous single-axle robot with an electric drive. The robot is customized for agricultural applications in mountainous areas with very steep slopes. The intention is to relieve farmers of arduous and safety-critical work. Furthermore, the robot is developed as a modular platform that can be used for work in forestry, municipal, sports-field, and winter/snow applications. Robot features: The core feature is the patented center-of-gravity control. With a sliding wheel axle of 800 millimeters, slopes up to a steepness of 35 degrees (70 percent) can be driven easily, and safe operation without tipping can be ensured. To make the robot more sustainable, it was equipped with electric drives and a 48-volt battery. To navigate in mountainous areas, several sensors are used. Unlike in applications on flat areas, the position and gradient of the robot on the slope need to be measured and considered in the path planning. A sensor system that detects possible obstacles, especially humans or animals that could be in the robot’s path, is currently under development. “Surf Zone Exploration With Crab-Like Legged Robots,” by Yifeng Gong, John Grezmak, Jianfeng Zhou, Nicole Graf, Zhili Gong, Nathan Carmichael, Airel Foss, Glenna Clifton, and Kathryn A. Daltorio from Case Western Reserve University, in Cleveland, and the University of Portland, in Oregon. Abstract: Surf zones are challenging for walking robots if they cannot anchor to the substrate, especially at the transition between dry sand and waves.
Crablike dactyl designs enable robots to achieve this anchoring behavior while still being lightweight enough to walk on dry sand. Our group has been developing a series of crablike robots to achieve the transition from walking on underwater surfaces to walking on dry land. Compared with the default forward-moving gait, we find that inward-pulling gaits and sideways walking increase efficiency in granular media. By using soft dactyls, robots can probe the ground to classify substrates, which can help modify gaits to better suit the environment and recognize hazardous conditions. Dactyls can also be used to securely grasp objects and dig into the substrate for installing cables, searching for buried objects, and collecting sediment samples. To simplify control and actuation, we developed a four-degrees-of-freedom Klann mechanism robot, which can climb onto an object and then grasp it. In addition, human interfaces will improve our ability to precisely control the robot for these types of tasks. In particular, the U.S. government has identified munitions retrieval as an environmental priority through its Strategic Environmental Research and Development Program. Our goal is to support these efforts with new robots. “Learning Exploration Strategies to Solve Real-World Marble Runs,” by Alisa Allaire and Christopher G. Atkeson from the Robotics Institute at Carnegie Mellon University, in Pittsburgh. Abstract: Tasks involving locally unstable or discontinuous dynamics (such as bifurcations and collisions) remain challenging in robotics, because small variations in the environment can have a significant impact on task outcomes. For such tasks, learning a robust deterministic policy is difficult. We focus on structuring exploration with multiple stochastic policies based on a mixture of experts (MoE) policy representation that can be efficiently adapted.
The MoE policy is composed of stochastic subpolicies that allow exploration of multiple distinct regions of the action space (or strategies) and a high-level selection policy to guide exploration toward the most promising regions. We develop a robot system to evaluate our approach in a real-world physical problem-solving domain. After training the MoE policy in simulation, online learning in the real world demonstrates efficient adaptation within just a few dozen attempts, with a minimal sim2real gap. Our results confirm that representing multiple strategies promotes efficient adaptation in new environments, and strategies learned under different dynamics can still provide useful information about where to look for good strategies. “Flipbot: Learning Continuous Paper-Flipping Via Coarse-to-Fine Exteroceptive-Proprioceptive Exploration,” by Chao Zhao, Chunli Jiang, Junhao Cai, Michael Yu Wang, Hongyu Yu, and Qifeng Chen from Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, and HKUST-Shenzhen-Hong Kong Collaborative Innovation Research Institute, Futian, Shenzhen. Abstract: This paper tackles the task of singulating and grasping paperlike deformable objects. We refer to such tasks as paper-flipping. In contrast to manipulating deformable objects that lack compression strength (such as shirts and ropes), minor variations in the physical properties of paperlike deformable objects significantly impact the results, making manipulation highly challenging. Here, we present Flipbot, a novel solution for flipping paperlike deformable objects. Flipbot allows the robot to capture object physical properties by integrating exteroceptive and proprioceptive perceptions that are indispensable for manipulating deformable objects. Furthermore, by incorporating a proposed coarse-to-fine exploration process, the system is capable of learning the optimal control parameters for effective paper-flipping through proprioceptive and exteroceptive inputs.
We deploy our method on a real-world robot with a soft gripper and learn in a self-supervised manner. The resulting policy demonstrates the effectiveness of Flipbot on paper-flipping tasks with various settings beyond the reach of prior studies, including but not limited to flipping pages throughout a book and emptying paper sheets in a box. The code is available here: https://robotll.github.io/Flipbot/ “Croche-Matic: A Robot for Crocheting 3D Cylindrical Geometry,” by Gabriella Perry, Jose Luis Garcia del Castillo y Lopez, and Nathan Melenbrink from Harvard University, in Cambridge, Mass. Abstract: Crochet is a textile craft that has resisted mechanization and industrialization except for a select number of one-off crochet machines. These machines are only capable of producing a limited subset of common crochet stitches. Crochet machines are not used in the textile industry, yet mass-produced crochet objects and clothes sold in stores like Target and Zara are almost certainly the products of crochet sweatshops. The popularity of crochet and the existence of crochet products in major chain stores shows that there is both a clear demand for this craft as well as a need for it to be produced in a more ethical way. In this paper, we present Croche-Matic, a radial crochet machine for generating three-dimensional cylindrical geometry. The Croche-Matic is designed based on Magic Ring technique, a method for hand-crocheting 3D cylindrical objects. The machine consists of nine mechanical axes that work in sequence to complete different types of crochet stitches, and includes a sensor component for measuring and regulating yarn tension within the mechanical system. Croche-Matic can complete the four main stitches used in Magic Ring technique. It has a success rate of 50.7 percent with single crochet stitches, and has demonstrated an ability to create three-dimensional objects. “SOPHIE: SOft and Flexible Aerial Vehicle for PHysical Interaction with the Environment,” by F. 
Ruiz, B. C. Arrue, and A. Ollero from the GRVC Robotics Lab of Seville, Spain. Abstract: This letter presents the first design of a soft and lightweight UAV, entirely 3D-printed in flexible filament. The drone’s flexible arms are equipped with a tendon-actuated bending system, which is used for applications that require physical interaction with the environment. The flexibility of the UAV can be controlled during the additive manufacturing process by adjusting the distribution of the infill rate ρTPU. However, the increase in flexibility implies difficulties in controlling the UAV, as well as structural, aerodynamic, and aeroelastic effects. This article provides insight into the dynamics of the system and validates the flyability of the vehicle for densities as low as 6 percent. Within this range, quasi-static arm deformations can be considered; thus the autopilot is fed back through a static arm deflection model. At lower densities, strong nonlinear elastic dynamics appear, which translates to complex modeling, and it is suggested to switch to data-based approaches. Moreover, this work demonstrates the ability of the soft UAV to perform full-body perching, specifically landing and stabilizing on pipelines and irregular surfaces without the need for an auxiliary system. “Reconfigurable Drone System for Transportation of Parcels with Variable Mass and Size,” by Fabrizio Schiano, Przemyslaw Mariusz Kornatowski, Leonardo Cencetti, and Dario Floreano from École Polytechnique Fédérale de Lausanne (EPFL), in Switzerland, and Leonardo S.p.A., Leonardo Labs, in Rome. Abstract: Cargo drones are designed to carry payloads with predefined shape, size, and/or mass. This lack of flexibility requires a fleet of diverse drones tailored to specific cargo dimensions. Here we propose a new reconfigurable drone based on a modular design that adapts to different cargo shapes, sizes, and masses.
We also propose a method for the automatic generation of drone configurations and suitable parameters for the flight controller. The parcel becomes the drone’s body, to which several individual propulsion modules are attached. We demonstrate the use of the reconfigurable hardware and the accompanying software by transporting parcels of different masses and sizes, requiring various numbers and positions of propulsion modules. The experiments are conducted indoors (with a motion-capture system) and outdoors (with an RTK-GNSS sensor). The proposed design represents a cheaper and more versatile alternative to solutions involving several drones for parcel transportation.
- MIT Multirobot Mapping Sets New “Gold Standard” by Evan Ackerman on 2 June 2023 at 10:30
This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore. Does your robot know where it is right now? Does it? Are you sure? And what about all of its robot friends—do they know where they are too? This is important. So important, in fact, that some would say that multirobot simultaneous localization and mapping (SLAM) is a crucial capability to obtain timely situational awareness over large areas. Those some would be a group of MIT roboticists who just won the IEEE Transactions on Robotics Best Paper Award for 2022, presented at this year’s IEEE International Conference on Robotics and Automation (ICRA 2023), in London. Congratulations! Out of more than 200 papers published in Transactions on Robotics last year, reviewers and editors voted to present the 2022 IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award to Yulun Tian, Yun Chang, Fernando Herrera Arias, Carlos Nieto-Granda, Jonathan P. How, and Luca Carlone from MIT for their paper “Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems.” “The editorial board, and the reviewers, were deeply impressed by the theoretical elegance and practical relevance of this paper and the open-source code that accompanies it. Kimera-Multi is now the gold standard for distributed multirobot SLAM.” —Kevin Lynch, editor in chief, IEEE Transactions on Robotics Robots rely on simultaneous localization and mapping to understand where they are in unknown environments. But unknown environments are a big place, and it takes more than one robot to explore all of them. If you send a whole team of robots, each of them can explore their own little bit, and then share what they’ve learned with one another to make a much bigger map that they can all take advantage of. Like most things robot, this is much easier said than done, which is why Kimera-Multi is so useful and important.
The award-winning researchers say that Kimera-Multi is a distributed system that runs locally on a bunch of robots all at once. If one robot finds itself in communications range with another robot, they can share map data, and use those data to build and improve a globally consistent map that includes semantic annotations. Since filming the above video, the researchers have done real-world tests with Kimera-Multi. Below is an example of the map generated by three robots as they travel a total of more than 2 kilometers. You can easily see how the accuracy of the map improves significantly as the robots talk to each other: More details and code are available on GitHub. Transactions on Robotics also selected some excellent Honorable Mentions for 2022:
“Stabilization of Complementarity Systems via Contact-Aware Controllers,” by Alp Aydinoglu, Philip Sieg, Victor M. Preciado, and Michael Posa
“Autonomous Cave Surveying With an Aerial Robot,” by Wennie Tabib, Kshitij Goel, John Yao, Curtis Boirum, and Nathan Michael
“Prehensile Manipulation Planning: Modeling, Algorithms and Implementation,” by Florent Lamiraux and Joseph Mirabel
“Rock-and-Walk Manipulation: Object Locomotion by Passive Rolling Dynamics and Periodic Active Control,” by Abdullah Nazir, Pu Xu, and Jungwon Seo
“Origami-Inspired Soft Actuators for Stimulus Perception and Crawling Robot Applications,” by Tao Jin, Long Li, Tianhong Wang, Guopeng Wang, Jianguo Cai, Yingzhong Tian, and Quan Zhang
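The rendezvous-and-merge idea described above can be illustrated with a toy sketch. This is illustrative only: the landmark names are invented, and the real Kimera-Multi system performs distributed, outlier-robust pose-graph optimization rather than the simple averaging shown here.

```python
# Toy illustration of multirobot map sharing (hypothetical simplification;
# Kimera-Multi actually runs distributed, robust pose-graph optimization).

def merge_maps(map_a, map_b):
    """Merge two local landmark maps {landmark_id: (x, y)} into one,
    averaging the estimates for landmarks both robots have seen."""
    merged = dict(map_a)
    for lid, (x, y) in map_b.items():
        if lid in merged:
            ax, ay = merged[lid]
            merged[lid] = ((ax + x) / 2, (ay + y) / 2)  # reconcile estimates
        else:
            merged[lid] = (x, y)  # adopt the other robot's new landmark
    return merged

# Robot A and robot B each explored "their own little bit":
robot_a = {"door_1": (1.0, 2.0), "pillar_3": (4.0, 0.0)}
robot_b = {"pillar_3": (4.2, 0.2), "stairs_7": (9.0, 5.0)}

# On a communications rendezvous, both end up with the bigger shared map:
shared = merge_maps(robot_a, robot_b)
print(shared["pillar_3"])  # averaged estimate for the commonly seen landmark
print(len(shared))         # 3 landmarks in the merged map
```

The point of the sketch is the communication pattern, not the math: each robot keeps working with its own local map and only reconciles estimates opportunistically, when a peer happens to be in range.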
- IEEE President’s Note: Connecting the Unconnected by Saifur Rahman on 1 June 2023 at 18:00
At IEEE, we know that the advancement of science and technology is the engine that drives the improvement of the quality of life for every person on this planet. Unfortunately, as we are all aware, today’s world faces significant challenges, including escalating conflicts, a climate crisis, food insecurity, gender inequality, and the approximately 2.7 billion people who cannot access the Internet. Bridging the divide The COVID-19 pandemic exposed the digital divide like never before. The world saw the need for universal broadband connectivity for remote work, online education, telemedicine, entertainment, and social networking. Those who had access thrived while those without it struggled. As millions of classrooms moved online, the lack of connectivity made it difficult for some students to participate in remote learning. Adults who could not perform their job virtually faced layoffs or reduced work hours. The pandemic also exposed weaknesses in the global infrastructure that supports the citizens of the world. It became even more apparent that vital communications, computing, energy, and distribution infrastructure was not always equitably distributed, particularly in less developed regions. 2023 IEEE President’s Award I had the pleasure of presenting the 2023 IEEE President’s Award to Doreen Bogdan-Martin, secretary-general of the International Telecommunication Union, on 28 March, at ITU’s headquarters in Geneva. The award recognizes her distinguished leadership at the agency and her notable contributions to the global public. It is my honor to recognize such a transformational leader and IEEE member for her demonstrated commitment to bridging the digital divide and to ensuring connectivity that is safe, inclusive, and affordable to all. Nearly 45 percent of global households do not have access to the Internet, according to UNESCO. A report from UNICEF estimates that nearly two-thirds of the world’s schoolchildren lack Internet access at home. 
This digital divide particularly affects women, who are 23 percent less likely than men to use the Internet. According to the United Nations Educational, Scientific and Cultural Organization, in 10 countries across Africa, Asia, and South America, women are between 30 percent and 50 percent less likely than men to make use of the Internet. Even in developed countries, Internet access is often lower than one might imagine. More than six percent of the U.S. population does not have a high-speed connection. In Australia, the figure is 13 percent. Globally, just over half of households have an Internet connection, according to UNESCO. In the developed world, 87 percent are connected, compared with 47 percent in developing nations and just 19 percent in the least developed countries. Benefits of technology As IEEE looks to lead the development of technology to tackle climate change and empower universal prosperity, it is essential that we recognize the role that meaningful connectivity and digital technology play in the organization’s goals to support global sustainability, drive economic growth, and transform health care, education, employment, gender equality, and youth empowerment. IEEE members around the globe are continuously developing and applying technology to help solve these problems. It is that universal passion—to improve global conditions—that is at the heart of our mission, as well as our expanding partnerships and significant activities supporting the achievement of the U.N. Sustainable Development Goals. One growing partnership is with the International Telecommunication Union, a U.N. specialized agency that helps set policy related to information and communication technologies. IEEE Member Doreen Bogdan-Martin was elected as ITU secretary-general and took office on 1 January, becoming the first woman to lead the 155-year-old organization. Bogdan-Martin is the recipient of this year’s IEEE President’s Award [see sidebar].
IEEE and ITU share the goal of bringing the benefits of technology to all of humanity. I look forward to working closely with the U.N. agency to promote meaningful connectivity, intensify cooperation to connect the unconnected, and strengthen the alignment of digital technologies with inclusive sustainable development. I truly believe that one of the most important applications of technology is to improve people’s lives. For those in underserved regions of the world, technology can improve educational opportunities, provide better health care, alleviate suffering, and maintain human dignity. Technology and technologists, particularly IEEE members, have a significant role to play in shaping life on this planet. They can use their skills to develop and advance technology—from green energy to reducing waste and emissions, and from transportation electrification to digital education, health, and agriculture. As a person who believes in the power of technology to benefit humanity, I find this to be a very compelling vision for our shared future. Please share your thoughts with me: firstname.lastname@example.org. —SAIFUR RAHMAN IEEE president and CEO This article appears in the June 2023 print issue as “Connecting the Unconnected.”
- Cybersecurity Gaps Could Put Astronauts at Grave Risk by Sarah Wells on 31 May 2023 at 14:53
This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore. On 3 July 1996, Earth was facing all but absolute destruction from an alien force hovering above three of the world’s biggest cities. Hope of humanity’s survival dwindled after brute force failed to thwart the attackers. But a small piece of malicious computer code changed the course of history when it was uploaded to the aliens’ computer system the next day. The malware—spoiler alert—disabled the invading ships’ defenses and ultimately saved humanity. At least, that’s what happened in the wildly speculative 1996 sci-fi film Independence Day. Yet, for all the reality-defying situations the blockbuster depicted, the prospect of a malware attack wreaking havoc on a future crewed spacecraft mission has digital-security experts very concerned. Gregory Falco, an assistant professor of civil and systems engineering at Johns Hopkins, explored the topic in a recent paper presented at the spring 2023 IEEE Aerospace Conference. Inspiration for the study, he says, came from his discovering a relative lack of cybersecurity features in the Artemis crew’s next-generation spacesuits. “Maybe you might think about securing the communications link to your satellite, but the stuff in space all trusts the rest of stuff in space.”—James Pavur, cybersecurity engineer “The reality was that there was zero specification when they had their call for proposals [for new spacesuit designs] that had anything to do with cyber[security],” Falco says. “That was frustrating for me to see. This paper was not supposed to be groundbreaking.... It was supposed to be kind of a call to say, ‘Hey, this is a problem.’ ” As human spaceflight prepares to enter a new, modern era with NASA’s Artemis program, China’s Tiangong Space Station, and a growing number of fledgling space-tourism companies, cybersecurity is at least as much of a persistent problem up there as it is down here.
Its magnitude is only heightened by the fact that maliciously driven system failures—in the cold, unforgiving vacuum of space—can escalate to life or death with just a few inopportune missteps. Apollo-era and even Space Shuttle–era approaches to cybersecurity are overdue for an update, Falco says. “Security by obscurity” no longer works When the United States and other space-faring nations, such as the then–Soviet Union, began to send humans to space in the late 1960s, there was little to fear in the way of cybersecurity risks. Not only did massively interconnected systems like the internet not yet exist, but technology aboard these craft was so bespoke that it protected itself through a “security by obscurity” approach. This meant that the technology was so complex that it effectively kept itself safe from tampering, says James Pavur, a cybersecurity researcher and lead cybersecurity software engineer at software company Istari Global. A consequence of this security approach is that once you do manage to enter the craft’s internal systems—whether you’re a crew member or, perhaps in years to come, a space tourist—you’ll be granted full access to the online systems with essentially zero questions asked. This security approach is not only insecure, says Pavur, but it is also vastly different from the zero-trust approach applied to many terrestrial technologies. “Cybersecurity has been something that kind of stops on the ground,” he says. “Like maybe you might think about securing the communications link to your satellite, but the stuff in space all trusts the rest of stuff in space.” NASA is no stranger to cybersecurity attacks on its terrestrial systems—nearly 2,000 “cyber incidents” were reported in 2020, according to a 2021 NASA report. But the types of threats that could target crewed spacecraft missions would be much different from phishing emails, says Falco. What are the cyberthreats in outer space? 
Cyberthreats to crewed spacecraft may focus on proximity approaches, such as installing malware or ransomware into a craft’s internal computer. In their paper, Falco and coauthor Nathaniel Gordon lay out four ways that crew members, including space tourists, may be used as part of these threats: crew as the attacker, crew as an attack vector, crew as collateral damage, and crew as the target. “It’s almost akin to medical-device security or things of that nature rather than opening email,” Falco says. “You don’t have the same kind of threats as you would have for an IT network.” Among a host of troubling scenarios, proprietary secrets—both private and national—could be stolen, the crew could be put at risk as part of a ransomware attack, or crew members could even be deliberately targeted through an attack on safety-critical systems like air filters. All of these types of attacks have taken place on Earth, say Falco and Gordon in their paper. But the high public profile of these missions, as well as the integrated nature of spacecraft—the close physical and network proximity of systems within a mission—could make cyberattacks on spacecraft particularly appealing. Again heightening the stakes, the harsh environment of outer (or lunar or planetary) space renders malicious cyberthreats that much more perilous for crew members. To date, deadly threats like these have thankfully not affected human spaceflight. Though if science fiction provides any over-the-horizon warning system for the shape of threats to come, consider sci-fi classics like 2001: A Space Odyssey or Alien—in which a nonhuman crew member is able to control the crafts’ computers, changing the ship’s route and even preventing a crew member from leaving the ship in an escape pod. Right now, say Falco and Gordon, there is little to keep a bad actor or a manipulated crew member onboard a spacecraft from doing something similar. 
Luckily, the growing presence of humans in space also provides an opportunity to create meaningful hardware, software, and policy changes surrounding the cybersecurity of these missions. Saadia Pekkanen is the founding director of the University of Washington’s Space Law, Data and Policy Program. In order to create a fertile environment for these innovations, she says, it will be important for space-dominant countries like the United States and China to create new policies and legislation to dictate how to address their own nations’ cybersecurity risk. While these changes won’t directly affect international policy, decisions made by these countries could steer how other countries address these problems as well. “We’re hopeful that there continues to be dialogue at the international level, but a lot of the regulatory action is actually going to come, we think, at the national level,” Pekkanen says. How can the problem be fixed? Hope for a solution, Pavur says, could begin with the fact that another sector in aerospace—the satellite industry—has made recent strides toward greater and more robust cybersecurity of their telemetry and communications (as outlined in a 2019 review paper published in the journal IEEE Aerospace and Electronic Systems). Falco points toward relevant terrestrial cybersecurity standards—including the zero-trust approach, which requires users to prove their identity before accessing systems—and architectures that keep safety-critical operations separate from all other onboard tasks. Creating a security environment that’s more supportive of ethical hackers—the kind of hackers who break things to find security flaws in order to fix them instead of exploiting them—would provide another crucial step forward, Pavur says. However, he adds, this might be easier said than done. “That’s very uncomfortable for the aerospace industry because it’s just not really how they historically thought about threat and risk management,” he says. 
“But I think it can be really transformative for companies and governments that are willing to take that risk.” Falco also notes that space tourism flights could benefit from a spacefaring equivalent of the TSA—to ensure that malware isn’t being smuggled onboard in a passenger’s digital devices. But perhaps most important, instead of “cutting and pasting” imperfect terrestrial solutions into space, Falco says that now is the time to reinvent how the world secures critical cyber infrastructure in Earth orbit and beyond. “We should use this opportunity to come up with new or different paradigms for how we handle security of physical systems,” he says. “It’s a white space. Taking things that are half-assed and don’t work perfectly to begin with and popping them into this domain is not going to really serve anyone the way we need.”
- AI Everywhere, All at Once, by Harry Goldstein on 31 May 2023 at 13:35
It’s been a frenetic six months since OpenAI introduced its large language model ChatGPT to the world at the end of last year. Every day since then, I’ve had at least one conversation about the consequences of the global AI experiment we find ourselves conducting. We aren’t ready for this, and by we, I mean everyone–individuals, institutions, governments, and even the corporations deploying the technology today. The sentiment that we’re moving too fast for our own good is reflected in an open letter calling for a pause in AI research, which was posted by the Future of Life Institute and signed by many AI luminaries, including some prominent IEEE members. As News Manager Margo Anderson reports online in The Institute, signatories include Senior Member and IEEE’s AI Ethics Maestro Eleanor “Nell” Watson and IEEE Fellow and chief scientist of software engineering at IBM, Grady Booch. He told Anderson, “These models are being unleashed into the wild by corporations who offer no transparency as to their corpus, their architecture, their guardrails, or the policies for handling data from users. My experience and my professional ethics tell me I must take a stand….” Explore IEEE AI ethics and governance programs: the IEEE CAI 2023 Conference on Artificial Intelligence (5-6 June, Santa Clara, Calif.); the AI GET Program for AI Ethics and Governance Standards; the IEEE P2863 Organizational Governance of Artificial Intelligence Working Group; the IEEE Awareness Module on AI Ethics; and IEEE CertifAIEd: Recent Advances in the Assessment and Certification of AI Ethics. But research and deployment haven’t paused, and AI is becoming essential across a range of domains. For instance, Google has applied deep-reinforcement learning to optimize placement of logic and memory on chips, as Senior Editor Samuel K. 
Moore reports in the June issue’s lead news story “Ending an Ugly Chapter in Chip Design.” Deep in the June feature well, the cofounders of KoBold Metals explain how they use machine-learning models to search for minerals needed for electric-vehicle batteries in “This AI Hunts for Hidden Hoards of Battery Minerals.” Somewhere between the proposed pause and headlong adoption of AI lie the social, economic, and political challenges of crafting the regulations that tech CEOs like OpenAI’s Sam Altman and Google’s Sundar Pichai have asked governments to create. To help make sense of the current AI moment, I talked with IEEE Spectrum senior editor Eliza Strickland, who recently won a Jesse H. Neal Award for best range of work by an author for her biomedical, geoengineering, and AI coverage. Trustworthiness, we agreed, is probably the most pressing near-term concern. Addressing the provenance of information and its traceability is key. Otherwise people may be swamped by so much bad information that the fragile consensus among humans about what is and isn’t real totally breaks down. The European Union is ahead of the rest of the world with its proposed Artificial Intelligence Act. It assigns AI applications to three risk categories: Those that create unacceptable risk would be banned, high-risk applications would be tightly regulated, and applications deemed to pose few if any risks would be left unregulated. The EU’s draft AI Act touches on traceability and deepfakes, but it doesn’t specifically address generative AI–deep-learning models that can produce high-quality text, images, or other content based on their training data. However, a recent article in The New Yorker by the computer scientist Jaron Lanier directly takes on provenance and traceability in generative AI systems. 
Lanier views generative AI as a social collaboration that mashes up work done by humans. He has helped develop a concept dubbed “data dignity,” which loosely translates to labeling these systems’ products as machine generated based on data sources that can be traced back to humans, who should be credited with their contributions. “In some versions of the idea,” Lanier writes, “people could get paid for what they create, even when it is filtered and recombined through big models, and tech hubs would earn fees for facilitating things that people want to do.” That’s an idea worth exploring right now. Unfortunately, we can’t prompt ChatGPT to spit out a global regulatory regime to guide how we should integrate AI into our lives. Regulations ultimately apply to the humans currently in charge, and only we can ensure a safe and prosperous future for people and our machines.
- Advice and Resources for Tech Workers Coping With Job Loss, by Kathy Pretz on 30 May 2023 at 19:43
Tens of thousands of tech workers have been laid off by companies recently, including at Amazon, Dropbox, GitHub, Google, Microsoft, and Vimeo. Startups, too, have made cuts, according to TechCrunch. To help IEEE members cope with losing a job, The Institute asked Chenyang Xu for advice. The IEEE Fellow is president and cochairman of Perception Vision Medical Technologies, known as PVmed. The global startup, which is involved with AI-powered precision radiotherapy and surgery for treating cancer, is headquartered in Guangzhou, China. Xu was formerly general manager of Siemens Technology to Business North America. In past articles, Xu has provided guidance for startups, such as steps they can take to ensure success, where founders can find financing, and how to be a global entrepreneur. Included with his advice are ways IEEE can help. Beef up your tech and leadership skills with online courses Although Xu isn’t a financial advisor, he says the first thing to do when you lose your job is to “slim down financially.” Do what it takes to make sure you have enough money to support yourself and your family until you land your next job, he says. “Don’t assume you’ll find a job right away,” he cautions. “You might not find one for six months, and by then you could become bankrupt.” To help unemployed members keep costs down, IEEE offers a reduced-dues program. For those who have lost their insurance coverage, the organization offers group insurance plans. After attending to your finances, Xu says, the next step is to reflect on your career. “Being laid off gives you some breathing room,” he says. “When you were working, you had no choice in what kind of work you had to do. But now that you’re laid off, you need to think about your career in 5 to 10 years. You now have experience and know what you like to do and what you don’t.” Ask yourself what makes you fulfilled, he says, as well as what makes you happy and what makes you feel valued. 
Then, he says, start looking for jobs that check all or some of the boxes. Once you’ve figured out what your long-range career plan is, you most likely will need to learn new skills, Xu says. If you’ve decided to change fields, you’ll need to learn even more. IEEE offers online courses that cover 16 subjects. There are classes, for example, on aerospace, computing, power and energy, and transportation. The emerging technologies course offerings cover artificial reality, blockchain technology, virtual reality, and more. Several leadership courses can teach you how to manage people. They include An Introduction to Leadership, Communication and Presentation Skills, and Technical Writing for Scientists and Engineers. Help with finding jobs and consulting gigs Looking for a new position? The IEEE Job Site lists hundreds of openings. Job seekers can upload their résumé and set up an alert to be notified of jobs matching their criteria. The site’s career-planning portal offers services such as interview tips and help with writing résumés and cover letters. IEEE-USA offers several on-demand job-search webinars. They cover topics such as how to find the right job, résumé trends, and healthy financial habits. You don’t have to live in the United States to access them. To earn some extra money, consider becoming a consultant, Xu says. “Consulting can be an excellent bridge to bring in income while working to secure the next job when facing the situation that your job search may take months or longer,” he says. “For some, consulting can be the next job.” IEEE-USA’s consultants web page offers a number of services. For example, members can find an assignment by registering their name in the IEEE-USA Consultant Finder. 
Those who want to network with other consultants can use the site to search for them by state or by IEEE’s U.S. geographic regions. The website also offers resources to help consultants succeed, such as e-books, newsletters, and webinars. To determine how much to charge a client, the IEEE-USA Salary Service provides information from IEEE’s U.S. members about their compensation and other details. IEEE Collabratec’s Consultants Exchange offers networking workshops, educational webinars, and more. If you are financially able and have the right ideas and expertise, Xu says, another option might be to launch your own company. The IEEE Entrepreneurship program offers a variety of resources for founders. Its IEEE Entrepreneurship Exchange is a community of tech startups, investors, and venture capital organizations that discuss and develop entrepreneurial ideas and endeavors. There’s also a mentorship program, in which founders can get advice from an experienced entrepreneur. The benefits of networking and social media Don’t overlook the power of networking in finding a job, Xu advises. “You need to reach out to as many people as possible,” he says. You’re likely to meet people who could help you at your IEEE chapter or section meetings and at IEEE conferences, Xu says. “You will be surprised about how many contacts you can meet who might help you find a job, mentor you, or give you information about a company that might be hiring,” he says. Take advantage of LinkedIn and other professional social media outlets, Xu suggests. He adds that you should let your followers know you are looking for a position. If you are knowledgeable about a specific topic, he encourages posting your thoughts about it to display your expertise to prospective employers. Consider joining the IEEE Collabratec networking platform. Members have access to IEEE’s membership directory, where they can find contacts who might help them find a job. 
They also can join communities of members who are working in their technical areas, such as artificial intelligence, consumer technology, and the Internet of Things. Relocation can be an adventure If you are still having a hard time finding a job, consider moving to a different region of your country—or to another country—where jobs are more plentiful, Xu says. “Relocating,” he says, “may open up whole new opportunities or adventures that are fulfilling to you or your family.”
- Robot Passes Turing Test for Polyculture Gardening, by Evan Ackerman on 28 May 2023 at 16:00
I love plants. I am not great with plants. I have accepted this fact and have therefore entrusted the lives of all of the plants in my care to robots. These aren’t fancy robots: They’re automated hydroponic systems that take care of water and nutrients and (fake) sunlight, and they do an amazing job. My plants are almost certainly happier this way, and therefore I don’t have to feel guilty about my hands-off approach. This is especially true now that there is data from roboticists at the University of California, Berkeley, to back up the assertion that robotic gardeners can do just as good a job as even the best human gardeners can. In fact, in some metrics, the robots can do even better. In 1950, Alan Turing considered the question “Can Machines Think?” and proposed a test based on comparing human versus machine ability to answer questions. In this paper, we consider the question “Can Machines Garden?” based on comparing human versus machine ability to tend a real polyculture garden. UC Berkeley has a long history of robotic gardens, stretching back to at least the early ’90s. And (as I have experienced) you can totally tend a garden with a robot. But the real question is this: Can you usefully tend a garden with a robot in a way that is as effective as a human tending that same garden? Time for some SCIENCE! AlphaGarden is a combination of a commercial gantry robot farming system and UC Berkeley’s AlphaGardenSim, which tells the robot what to do to maximize plant health and growth. The system includes a high-resolution camera and soil moisture sensors for monitoring plant growth, and everything is (mostly) completely automated, from seed planting to drip irrigation to pruning. The garden itself is somewhat complicated, since it’s a polyculture garden (meaning a mix of different plants grown together). Polyculture farming mimics how plants grow in nature; its benefits include pest resilience, decreased fertilization needs, and improved soil health. 
But since different plants have different needs and grow in different ways at different rates, polyculture farming is more labor-intensive than monoculture, which is how most large-scale farming happens. To test AlphaGarden’s performance, the UC Berkeley researchers planted two side-by-side farming plots with the same seeds at the same time. There were 32 plants in total, including kale, borage, Swiss chard, mustard greens, turnips, arugula, green lettuce, cilantro, and red lettuce. Over the course of two months, AlphaGarden tended its plot full time, while professional horticulturalists tended the plot next door. Then, the experiment was repeated, except that AlphaGarden was allowed to stagger the seed planting to give slower-growing plants a head start. A human did have to help the robot out with pruning from time to time, but just to follow the robot’s directions when the pruning tool couldn’t quite do what the robot wanted it to do. The robot and the professional human both achieved similar results in their garden plots. (Image: UC Berkeley) The results of these tests showed that the robot was able to keep up with the professional human in terms of both overall plant diversity and coverage. In other words, stuff grew just as well when tended by the robot as it did when tended by a professional human. The biggest difference is that the robot managed to keep up while using 44 percent less water: several hundred liters less over two months. “AlphaGarden has thus passed the Turing test for gardening,” the researchers say. They also say that “much remains to be done,” mostly by improving the AlphaGardenSim plant-growth simulator to further optimize water use, although there are other variables to explore like artificial light sources. The future here is a little uncertain, though—the hardware is pretty expensive, and human labor is (relatively) cheap. Expert human knowledge is not cheap, of course. 
But for those of us who are very much nonexperts, I could easily imagine mounting some cameras above my garden and installing some sensors and then just following the orders of the simulator about where and when and how much to water and prune. I’m always happy to donate my labor to a robot that knows what it’s doing better than I do. “Can Machines Garden? Systematically Comparing the AlphaGarden vs. Professional Horticulturalists,” by Simeon Adebola, Rishi Parikh, Mark Presten, Satvik Sharma, Shrey Aeron, Ananth Rao, Sandeep Mukherjee, Tomson Qu, Christina Wistrom, Eugen Solowjow, and Ken Goldberg from UC Berkeley, will be presented at ICRA 2023 in London.
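For readers curious what “coverage” and “diversity” mean operationally, here is a toy illustration (my sketch, not the Berkeley team’s code, and the paper’s exact definitions may differ): coverage as the fraction of plot cells occupied by any plant, and diversity as the Shannon entropy of the species mix, normalized to a 0-to-1 scale.

```python
import math
from collections import Counter

def coverage_and_diversity(plot):
    """Toy garden metrics.

    plot: 2-D grid (list of lists) of species names, with None for bare soil.
    Returns (coverage, diversity): coverage is the fraction of cells occupied
    by any plant; diversity is the Shannon entropy of the species counts,
    normalized so that an even mix of species scores 1.0.
    """
    cells = [s for row in plot for s in row]
    occupied = [s for s in cells if s is not None]
    if not occupied:
        return 0.0, 0.0  # bare plot: nothing growing, nothing diverse
    coverage = len(occupied) / len(cells)
    counts = Counter(occupied)
    probs = [c / len(occupied) for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    diversity = entropy / math.log(len(counts)) if len(counts) > 1 else 0.0
    return coverage, diversity
```

Comparing these two numbers for the robot’s plot and the humans’ plot, per species map, is essentially the comparison the experiment reports.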
- The Relay That Changed the Power Industry, by Qusi Alqarqaz on 27 May 2023 at 18:00
For more than a century, utility companies have used electromechanical relays to protect power systems against damage that might occur during severe weather, accidents, and other abnormal conditions. But the relays could neither locate the faults nor accurately record what happened. Then, in 1977, Edmund O. Schweitzer III invented the digital microprocessor-based relay as part of his doctoral thesis. Schweitzer’s relay, which could locate a fault to within 1 kilometer, set new standards for utility reliability, safety, and efficiency. Edmund O. Schweitzer III. Employer: Schweitzer Engineering Laboratories. Title: President and CTO. Member grade: Life Fellow. Alma maters: Purdue University, West Lafayette, Ind.; Washington State University, Pullman. To develop and manufacture his relay, he launched Schweitzer Engineering Laboratories in 1982 from his basement in Pullman, Wash. Today SEL manufactures hundreds of products that protect, monitor, control, and automate electric power systems in more than 165 countries. Schweitzer, an IEEE Life Fellow, is his company’s president and chief technology officer. He started SEL with seven workers; it now has more than 6,000. The 40-year-old employee-owned company continues to grow. It has four manufacturing facilities in the United States. Its newest one, which opened in March in Moscow, Idaho, fabricates printed circuit boards. Schweitzer has received many accolades for his work, including the 2012 IEEE Medal in Power Engineering. In 2019 he was inducted into the U.S. National Inventors Hall of Fame. Advances in power electronics Power system faults can happen when a tree or vehicle hits a power line, a grid operator makes a mistake, or equipment fails. The fault shunts extra current to some parts of the circuit, shorting it out. Without a protective scheme or device in place to safeguard equipment and ensure continuity of the power supply, an outage or blackout could propagate throughout the grid. 
Overcurrent is not the only damage that can occur, though. Faults also can change voltages, frequencies, and the direction of current. A protection scheme should quickly isolate the fault from the rest of the grid, thus limiting damage on the spot and preventing the fault from spreading to the rest of the system. To do that, protection devices must be installed. That’s where Schweitzer’s digital microprocessor-based relay comes in. He perfected it in 1982. It later was commercialized and sold as the SEL-21 digital distance relay/fault locator. Inspired by a blackout and a protective relays book Schweitzer says his relay was, in part, inspired by an event that took place during his first year of college. “Back in 1965, when I was a freshman at Purdue University, a major blackout left millions without power for hours in the U.S. Northeast and Ontario, Canada,” he recalls. “It was quite an event, and I remember it well. I learned many lessons from it. One was how difficult it was to restore power.” He says he also was inspired by the book Protective Relays: Their Theory and Practice. He read it while an engineering graduate student at Washington State University, in Pullman. “I bought the book on the Thursday before classes began and read it over the weekend,” he says. “I couldn’t put it down. I was hooked. “I realized that these solid-state devices were special-purpose signal processors. They read the voltage and current from the power systems and decided whether the power systems’ apparatuses were operating correctly. I started thinking about how I could take what I knew about digital signal processing and put it to work inside a microprocessor to protect an electric power system.” The 4-bit and 8-bit microprocessors were new at the time. “I think this is how most inventions start: taking one technology and putting it together with another to make new things,” he says. 
“The inventors of the microprocessor had no idea about all the kinds of things people would use it for. It is amazing.” He says he was introduced to signal processing, signal analysis, and how to use digital techniques in 1968 while at his first job, working for the U.S. Department of Defense at Fort Meade, in Maryland. Faster ways to clear faults and improve cybersecurity Schweitzer continues to invent ways of protecting and controlling electric power systems. In 2016 his company released the SEL-T400L, which samples a power system every microsecond to detect the time between traveling waves moving at the speed of light. The idea is to quickly detect and locate transmission line faults. The relay decides whether to trip a circuit or take other actions in 1 to 2 milliseconds. Previously, it would take a protective relay on the order of 16 ms. A typical circuit breaker takes 30 to 40 ms in high-voltage AC circuits to trip. “I like to talk about the need for speed,” Schweitzer says. “In this day and age, there’s no reason to wait to clear a fault. Faster tripping is a tremendous opportunity from a point of view of voltage and angle stability, safety, reducing fire risk, and damage to electrical equipment. “We are also going to be able to get a lot more out of the existing infrastructure by tripping faster. For every millisecond in clearing time saved, the transmission system stability limits go up by 15 megawatts. That’s about one feeder per millisecond. So, if we save 12 ms, all of the sudden we are able to serve 12 more distribution feeders from one part of one transmission system.” The time-domain technology also will find applications in transformer and distribution protection schemes, he says, as well as have a significant impact on DC transmission. 
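Two of the numbers above lend themselves to a quick back-of-the-envelope sketch (mine, not SEL’s firmware). A two-ended traveling-wave locator infers fault position from the difference in surge arrival times at the line’s two terminals, and Schweitzer’s 15-megawatts-per-millisecond figure converts clearing-time savings into capacity. The propagation speed below is an assumed typical value for overhead lines, just under the speed of light.

```python
# Back-of-the-envelope sketches only; not SEL's algorithms.

WAVE_SPEED = 2.9e8  # assumed surge propagation speed, in m/s


def fault_distance(line_length_m, t_arrival_a, t_arrival_b):
    """Two-ended traveling-wave fault location.

    A fault launches surges toward both ends of the line. With d_a + d_b = L
    and each surge traveling at WAVE_SPEED, the arrival-time difference
    t_a - t_b = (2*d_a - L) / v, so d_a = (L + v*(t_a - t_b)) / 2.
    Returns the fault's distance from terminal A, in meters.
    """
    return (line_length_m + WAVE_SPEED * (t_arrival_a - t_arrival_b)) / 2


def capacity_gain_mw(ms_saved, mw_per_ms=15.0):
    """Stability-limit gain from faster clearing, per the 15 MW/ms figure."""
    return ms_saved * mw_per_ms
```

On a 100-kilometer line, a surge reaching terminal A about 138 microseconds before terminal B places the fault roughly 30 kilometers from A; and saving 12 ms of clearing time yields 180 MW, or a dozen feeders at about 15 MW each, matching Schweitzer’s arithmetic.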
What excites Schweitzer today, he says, is the concept of energy packets, which he and SEL have been working on. The packets measure energy exchange for all signals including distorted AC systems or DC networks. “Energy packets precisely measure energy transfer, independent of frequency or phase angle, and update at a fixed rate with a common time reference such as every millisecond,” he says. “Time-domain energy packets provide an opportunity to speed up control systems and accurately measure energy on distorted systems—which challenges traditional frequency-domain calculation methods.” He also is focusing on improving the reliability of critical infrastructure networks by improving cybersecurity, situational awareness, and performance. Plug-and-play and best-effort networking aren’t safe enough for critical infrastructure, he says. “SEL OT SDN technology solves some significant cybersecurity problems,” he says, “and frankly, it makes me feel comfortable for the first time with using Ethernet in a substation.” From engineering professor to inventor Schweitzer didn’t start off planning to launch his own company. He began a successful career in academia in 1977 after joining the electrical engineering faculty at Ohio University, in Athens. Two years later, he moved to Pullman, Wash., where he taught at Washington State’s Voiland College of Engineering and Architecture for the next six years. It was only after sales of the SEL-21 took off that he decided to devote himself to his startup full time. It’s little surprise that Schweitzer became an inventor and started his own company, as his father and grandfather were inventors and entrepreneurs. His grandfather, Edmund O. Schweitzer, who held 87 patents, invented the first reliable high-voltage fuse in collaboration with Nicholas J. Conrad in 1911, the year the two founded Schweitzer and Conrad—today known as S&C Electric Co.—in Chicago. Schweitzer’s father, Edmund O. Schweitzer Jr., had 208 patents. 
He invented several line-powered fault-indicating devices, and he founded the E.O. Schweitzer Manufacturing Co. in 1949. It is now part of SEL. Schweitzer says a friend gave him the best financial advice he ever got about starting a business: Save your money. “I am so proud that our 6,000-plus-person company is 100 percent employee-owned,” Schweitzer says. “We want to invest in the future, so we reinvest our savings into growth.” He advises those who are planning to start a business to focus on their customers and create value for them. “Unleash your creativity,” he says, “and get engaged with customers. Also, figure out how to contribute to society and make the world a better place.”
- Wakka Wakka! This Turing Machine Plays Pac-Man, by Matthew Regan on 27 May 2023 at 13:00
As I read the newest papers about DNA-based computing, I had to confront a rather unpleasant truth. Despite being a geneticist who also majored in computer science, I was struggling to bridge two concepts—the universal Turing machine, the very essence of computing, and the von Neumann architecture, the basis of most modern CPUs. I had written C++ code to emulate the machine described in Turing’s 1936 paper, and could use it to decide, say, if a word was a palindrome. But I couldn’t see how such a machine—with its one-dimensional tape memory and ability to look at only one symbol on that tape at a time—could behave like a billion-transistor processor with hardware features such as an arithmetic logic unit (ALU), program counter, and instruction register. I scoured old textbooks and watched online lectures about theoretical computer science, but my knowledge didn’t advance. I decided I would build a physical Turing machine that could execute code written for a real processor. Rather than a billion-transistor behemoth, I thought I’d target the humble 8-bit 6502 microprocessor. This legendary chip powered the computers I used in my youth. And as a final proof, my simulated processor would have to run Pac-Man, specifically the version of the game written for the Apple II computer. In Turing’s paper, his eponymous machine is an abstract concept with infinite memory. Infinite memory isn’t possible in reality, but physical Turing machines can be built with enough memory for the task at hand. The hardware implementation of a Turing machine can be organized around a rule book and a notepad. Indeed, when we do basic arithmetic, we use a rule book in our head (such as knowing when to carry a 1). We manipulate numbers and other symbols using these rules, stepping through the process for, say, long division. There are key differences, though. We can move all over a two-dimensional notepad, doing a scratch calculation in the margin before returning to the main problem. 
With a Turing machine we can only move left or right on a one-dimensional notepad, reading or writing one symbol at a time. A key revelation for me was that the internal registers of the 6502 could be duplicated sequentially on the one-dimensional notepad using four symbols—0, 1, _ (or space), and $. The symbols 0 and 1 are used to store the actual binary data that would sit in a 6502’s register. The $ symbol is used to delineate different registers, and the _ symbol acts as a marker, making it easy to return to a spot in memory we’re working with. The main memory of the Apple II is emulated in a similar fashion.
Apart from some flip-flops, a couple of NOT gates, and an up-down counter, the PureTuring machine uses only RAM and ROM chips—there are no logic chips. An Arduino board [bottom] monitors the RAM to extract display data. James Provost
Programming a CPU is all about manipulating the registers and transferring their contents to and from main memory using an instruction set. I could emulate the 6502’s instructions as chains of rules that acted on the registers, symbol by symbol. The rules are stored in a programmable ROM, with the output of one rule dictating the next rule to be used, what should be written on the notepad (implemented as a RAM chip), and whether we should read the next symbol or the previous one. I dubbed my machine PureTuring. The ROM’s data outputs are connected to a set of flip-flops. Some of the flip-flops are connected to the RAM, to allow the next or previous symbol to be fetched. Others are connected to the ROM’s own address lines in a feedback loop that selects the next rule. It turned out to be more efficient to interleave the bits of some registers rather than leaving them as separate 8-bit chunks. Creating the rule book to implement the 6502’s instruction set required 9,000 rules. Of these, 2,500 were created using an old-school method of writing them on index cards, and the rest were generated by a script.
Putting this together took about six months.
Only some of the 6502 registers are exposed to programmers [green]; its internal, hidden registers [purple] are used to execute instructions. Below each register is shown how the registers are arranged, and sometimes interleaved, on the PureTuring’s “tape.” James Provost
To fetch a software instruction, PureTuring steps through the notepad using $ symbols as landmarks until it gets to the memory location pointed to by the program counter. The 6502 opcodes are one byte long, so by the time the eighth bit is read, PureTuring is in one of 256 states. Then PureTuring returns to the instruction register and writes the opcode there, before moving on to perform the instruction. A single instruction can take up to 3 million PureTuring clock cycles to fetch, versus one to six cycles for the actual 6502! The 6502 uses a memory-mapped input/output system. This means that devices such as displays are represented as locations somewhere within main memory. By using an Arduino to monitor the part of the notepad that corresponds to the Apple II’s graphics memory, I could extract pixels and show them on an attached terminal or screen. This required writing a “dewozzing” function for the Arduino as the Apple II’s pixel data is laid out in a complex scheme. (Steve Wozniak created this scheme to enable the Apple II to fake an analog color TV signal with digital chips and keep the dynamic RAM refreshed.) I could have inserted input from a keyboard into the notepad in a similar fashion, but I didn’t bother because actually playing Pac-Man on the PureTuring would require extraordinary patience: It took about 60 hours just to draw one frame’s worth of movement for the Pac-Man character and the pursuing enemy ghosts. A modification that moved the machine along the continuum toward a von Neumann architecture added circuitry to permit random access to a notepad symbol, making it unnecessary to step through all prior symbols.
This adjustment cut the time to draw the game characters to a mere 20 seconds per frame!
PureTuring Part 1: Turing Machine Microprocessor. www.youtube.com
Looking forward, features can be added one by one, moving piecemeal from a Turing machine to a von Neumann architecture: Widen the bus to read eight symbols at a time instead of one, replace the registers in the notepad with hardware registers, add an ALU, and so on. Now when I read papers and articles on DNA-based computing, I can trace each element back to something in a Turing machine or forward to a conventional architecture, running my own little mental machine along a conceptual tape!
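The opcode fetch described in the article amounts to walking a depth-8 binary decision tree: each symbol read narrows the possibilities by half, so eight reads leave the machine in one of 2^8 = 256 states, one per possible opcode. A small illustrative Python sketch (not PureTuring’s actual rule encoding):

```python
# Reading 8 binary tape symbols one at a time is a depth-8 binary
# decision tree: the running "state" doubles with each symbol, so the
# final state (0..255) uniquely identifies the opcode just read.

def fetch_opcode(bits):
    """Consume 8 tape symbols; one rule application per symbol."""
    state = 0
    for b in bits:
        state = state * 2 + (1 if b == "1" else 0)
    return state   # selects which instruction's rule chain runs next

assert fetch_opcode("10101001") == 0xA9   # 0xA9 is the 6502's LDA #imm
```

In hardware terms, each of those doublings is one pass through the ROM-and-flip-flop feedback loop; the 256 final states are simply 256 distinct rule addresses.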
- Video Friday: The Coolest Robots, by Evan Ackerman on 26 May 2023 at 15:18
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
ICRA 2023: 29 May–2 June 2023, LONDON
Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS
Enjoy today’s videos! We’ve just relaunched the IEEE Robots Guide over at RobotsGuide.com, featuring new robots, new interactives, and a complete redesign from the ground up. Tell your friends, tell your family, and explore nearly 250 robots in pictures and videos and detailed facts and specs, with lots more on the way! [Robots Guide] The qualities that make a knitted sweater comfortable and easy to wear are the same things that might allow robots to better interact with humans. RobotSweater, developed by a research team from Carnegie Mellon University’s Robotics Institute, is a machine-knitted textile “skin” that can sense contact and pressure. RobotSweater’s knitted fabric consists of two layers of conductive yarn made with metallic fibers to conduct electricity. Sandwiched between the two is a net-like, lace-patterned layer. When pressure is applied to the fabric—say, from someone touching it—the conductive yarn closes a circuit and is read by the sensors. In their research, the team demonstrated that pushing on a companion robot outfitted in RobotSweater told it which way to move or what direction to turn its head. When used on a robot arm, RobotSweater allowed a push from a person’s hand to guide the arm’s movement, while grabbing the arm told it to open or close its gripper.
In future research, the team wants to explore how to program reactions from the swipe or pinching motions used on a touchscreen. [CMU] DEEP Robotics Co. yesterday announced that it has launched the latest version of its Lite3 robotic dog in Europe. The system combines advanced mobility and an open modular structure to serve the education, research, and entertainment markets, said the Hangzhou, China–based company. Lite3’s announced price is US $2,900. It ships in September. [Deep Robotics] Estimating terrain traversability in off-road environments requires reasoning about complex interaction dynamics between the robot and these terrains. We propose a method that learns to predict traversability costmaps by combining exteroceptive environmental information with proprioceptive terrain interaction feedback in a self-supervised manner. We validate our method in multiple short- and large-scale navigation tasks on a large, autonomous all-terrain vehicle (ATV) on challenging off-road terrains, and demonstrate ease of integration on a separate large ground robot. This work will be presented at the IEEE International Conference on Robotics and Automation (ICRA 2023) in London next week. [Mateo Guaman Castro] Thanks, Mateo! Sheet Metal Workers’ Local Union 104 has introduced a training course on automating and innovating field layout with the Dusty Robotics FieldPrinter system. [Dusty Robotics] Apptronik has half of its general-purpose robot ready to go! The other half is still a work in progress, but here’s progress: [Apptronik] A spotted-lanternfly-murdering robot is my kind of murdering robot. [FRC] ANYmal is rated IP67 for water resistance, but this still terrifies me. [ANYbotics] Check out the impressive ankle action on this humanoid walking over squishy terrain. 
[CNRS-AIST JRL] Wing’s progress can be charted along the increasingly dense environments in which we’ve been able to operate: from rural farms to lightly populated suburbs to more dense suburbs to large metropolitan areas like Brisbane, Australia; Helsinki, Finland; and the Dallas Fort Worth metro area in Texas. Earlier this month, we did a demonstration delivery at Coors Field–home of the Colorado Rockies–delivering beer (Coors of course) and peanuts to the field. Admittedly, it wasn’t on a game day, but there were 1,000 people in the stands enjoying the kickoff party for AUVSI’s annual autonomous systems conference. [Wing] Pollen Robotics’ team will be going to ICRA 2023 in London! Come and meet us there to try teleoperating Reachy by yourself and give us your feedback! [Pollen Robotics] The most efficient drone engine is no engine at all. [MAVLab] Is your robot spineless? Should it be? Let’s find out. [UPenn] Looks like we’re getting closer to that robot butler. [Prisma Lab] This episode of the Robot Brains podcast features Raff D’Andrea, from Kiva, Verity, and ETH Zurich. [Robot Brains]
- Who’s the Coolest Robot of All? by Randi Klett on 25 May 2023 at 22:18
Calling all robot fanatics! We are the creators of the Robots Guide, IEEE’s interactive site about robotics, and we need your help. Today, we’re expanding our massive catalog to nearly 250 robots, and we want your opinion to decide which are the coolest, most wanted, and also creepiest robots out there. To submit your votes, find robots on the site that are interesting to you and rate them based on their design and capabilities. Every Friday, we’ll crunch the votes to update our Robot Rankings.
Rate this robot: For each robot on the site, you can submit your overall rating, answer whether you’d want to have this robot, and rate its appearance. IEEE Spectrum
May the coolest (or creepiest) robot win! Our collection currently features 242 robots, including humanoids, drones, social robots, underwater vehicles, exoskeletons, self-driving cars, and more.
The Robots Guide features three rankings: Top Rated, Most Wanted, and Creepiest. IEEE Spectrum
You can explore the collection by filtering robots by category, capability, and country, or sorting them by name, year, or size. And you can also search robots by keywords. In particular, check out some of the new additions, which could use more votes. These include some really cool robots like LOVOT, Ingenuity, GITAI G1, Tertill, Salto, Proteus, and SlothBot. Each robot profile includes detailed tech specs, photos, videos, history, and some also have interactives that let you move and spin robots 360 degrees on the screen. And note that these are all real-world robots. If you’re looking for sci-fi robots, check out our new Face-Off: Sci-Fi Robots game.
Robots Redesign
Today, we’re also relaunching the Robots Guide site with a fast and sleek new design, more sections and games, and thousands of photos and videos. The new site was designed by Pentagram, the prestigious design consultancy, in collaboration with Standard, a design and technology studio. The site is built as a modern, fully responsive Web app.
It’s powered by Remix.run, a React-based Web framework, with structured content by Sanity.io and site search by Algolia. More highlights:
- Explore nearly 250 robots
- Make robots move and spin 360 degrees
- View over 1,000 amazing photos
- Watch 900 videos of robots in action
- Play the Sci-Fi Robots Face-Off game
- Keep up to date with daily robot news
- Read detailed tech specs about each robot
- Robot Rankings: Top Rated, Most Wanted, Creepiest
The Robots Guide was designed for anyone interested in learning more about robotics, including robot enthusiasts, both experts and beginners, researchers, entrepreneurs, STEM educators, teachers, and students. The foundation for the Robots Guide is IEEE’s Robots App, which was downloaded 1.3 million times and is used in classrooms and STEM programs all over the world. The Robots Guide is an editorial product of IEEE Spectrum, the world’s leading technology and engineering magazine and the flagship publication of the IEEE. Thank you to the IEEE Foundation and our sponsors for their support, which enables all of the Robots Guide content to be open and free to everyone.
- Meet the Forksheet: Imec’s In-Between Transistor, by Samuel K. Moore on 25 May 2023 at 16:00
The most advanced manufacturers of computer processors are in the middle of the first big change in device architecture in a decade—the shift from finFETs to nanosheets. Another 10 years should bring about another fundamental change, where nanosheet devices are stacked atop each other to form complementary FETs (CFETs), capable of cutting the size of some circuits in half. But the latter move is likely to be a heavy lift, say experts. An in-between transistor called the forksheet might keep circuits shrinking without quite as much work. The idea for the forksheet came from exploring the limits of the nanosheet architecture, says Julien Ryckaert, the vice president for logic technologies at Imec. The nanosheet’s main feature is its horizontal stacks of silicon ribbons surrounded by its current-controlling gate. Although nanosheets only recently entered production, experts were already looking for their limits years ago. Imec was tasked with figuring out “at what point nanosheet will start tanking,” he says. Ryckaert’s team found that one of the main limitations to shrinking nanosheet-based logic is keeping the separation between the two types of transistor that make up CMOS logic. The two types—NMOS and PMOS—must maintain a certain distance to limit capacitance that saps the devices’ performance and power consumption. “The forksheet is a way to break that limitation,” Ryckaert says. Instead of individual nanosheet devices, the forksheet scheme builds them as pairs on either side of a dielectric wall. (No, it doesn’t really resemble a fork much.) The wall allows the devices to be placed closer together without causing a capacitance problem, says Naoto Horiguchi, the director of CMOS technology at Imec. Designers could use the extra space to shrink logic cells, or they could use the extra room to build transistors with wider sheets leading to better performance, he says. 
Leading-edge transistors are already transitioning from the fin field-effect transistor (FinFET) architecture to nanosheets. The ultimate goal is to stack two devices atop each other in a CFET configuration. The forksheet may be an intermediary step on the way. Imec
“CFET is probably the ultimate CMOS architecture,” says Horiguchi of the device that Imec expects to reach production readiness around 2032. But he adds that CFET “integration is very complex.” Forksheet reuses most of the nanosheet production steps, potentially making it an easier job, he says. Imec predicts it could be ready around 2028. There are still many hurdles to leap over, however. “It’s more complex than initially thought,” Horiguchi says. From a manufacturing perspective, the dielectric wall is a bit of a headache. There are several types of dielectric used in advanced CMOS and several steps that involve etching it away. Making forksheets means etching those others without accidentally attacking the wall. And it’s still an open question which types of transistor should go on either side of the wall, Horiguchi says. The initial idea was to put PMOS on one side and NMOS on the other, but there may be advantages to putting the same type on both sides instead.
- Keeping Moore’s Law Going Is Getting Complicated, by Samuel K. Moore on 24 May 2023 at 16:10
There was a time, decades really, when all it took to make a better computer chip were smaller transistors and narrower interconnects. That time’s long gone now, and although transistors will continue to get a bit smaller, simply making them so is no longer the point. The only way to keep up the exponential pace of computing now is a scheme called system technology co-optimization, or STCO, argued researchers at ITF World 2023 last week in Antwerp, Belgium. It’s the ability to break chips up into their functional components, use the optimal transistor and interconnect technology for each function, and stitch them back together to create a lower-power, better-functioning whole. “This leads us to a new paradigm for CMOS,” says Imec R&D manager Marie Garcia Bardon. CMOS 2.0, as the Belgium-based nanotech research organization is calling it, is a complicated vision. But it may be the most practical way forward, and parts of it are already evident in today’s most advanced chips.
How we got here
In a sense, the semiconductor industry was spoiled by the decades prior to about 2005, says Julien Ryckaert, R&D vice president at Imec. During that time, chemists and device physicists were able to regularly produce a smaller, lower-power, faster transistor that could be used for every function on a chip and that would lead to a steady increase in computing capability. But the wheels began to come off that scheme not long thereafter. Device specialists could come up with excellent new transistors, but those transistors weren’t making better, smaller circuits, such as the SRAM memory and standard logic cells that make up the bulk of CPUs. In response, chipmakers began to break down the barriers between standard cell design and transistor development. Called design technology co-optimization, or DTCO, the new scheme led to devices designed specifically to make better standard cells and memory. But DTCO isn’t enough to keep computing going.
The limits of physics and economic realities conspired to put barriers in the path to progressing with a one-size-fits-all transistor. For example, physical limits have prevented CMOS operating voltages from decreasing below about 0.7 volts, slowing down progress in power consumption, explains Anabela Veloso, principal engineer at Imec. Moving to multicore processors helped ameliorate that issue for a time. Meanwhile, input-output limits meant it became more and more necessary to integrate the functions of multiple chips onto the processor. So in addition to a system-on-chip (SoC) having multiple instances of processor cores, they also integrate network, memory, and often specialized signal-processing cores. Not only do these cores and functions have different power and other needs, they also can’t be made smaller at the same rate. Even the CPU’s cache memory, SRAM, isn’t scaling down as quickly as the processor’s logic.
System technology co-optimization
Getting things unstuck is as much a philosophical shift as a collection of technologies. According to Ryckaert, STCO means looking at a system-on-chip as a collection of functions, such as power supply, I/O, and cache memory. “When you start reasoning about functions, you realize that an SoC is not this homogeneous system, just transistors and interconnect,” he says. “It is functions, which are optimized for different purposes.” Ideally, you could build each function using the process technology best suited to it. In practice, that mostly means building each on its own sliver of silicon, or chiplet. Then you would bind those together using technology, such as advanced 3D stacking, so that all the functions act as if they were on the same piece of silicon. Examples of this thinking are already present in advanced processors and AI accelerators.
Intel’s high-performance computing accelerator Ponte Vecchio (now called Intel Data Center GPU Max) is made up of 47 chiplets built using several different processes from both Intel and Taiwan Semiconductor Manufacturing Co. AMD already uses different technologies for the I/O chiplet and compute chiplets in its CPUs, and it recently began separating out SRAM for the compute chiplet’s high-level cache memory. Imec’s road map to CMOS 2.0 goes even further. The plan requires continuing to shrink transistors, moving power and possibly clock signals beneath a CPU’s silicon, and ever-tighter 3D-chip integration. “We can use those technologies to recognize the different functions, to disintegrate the SoC, and reintegrate it to be very efficient,” says Ryckaert.
Transistors will change form over the coming decade, but so will the metal that connects them. Ultimately, transistors could be stacked-up devices made of 2D semiconductors instead of silicon. Power delivery and other infrastructure could be layered beneath the transistors. Imec
Continued transistor scaling
Major chipmakers are already transitioning from the FinFET transistors that powered the last decade of computers and smartphones to a new architecture, nanosheet transistors [see “The Nanosheet Transistor Is the Next (and Maybe Last) Step in Moore’s Law”]. Ultimately, two nanosheet transistors will be built atop each other to form the complementary FET, or CFET, which Veloso says “represents the ultimate in CMOS scaling” [see “3D-Stacked CMOS Takes Moore’s Law to New Heights”]. As these devices scale down and change shape, one of the main goals is to drive down the size of standard logic cells. That is typically measured in “track height”—basically, the number of metal interconnect lines that can fit within the cell. Advanced FinFETs and early nanosheet devices are six-track cells.
Moving to five tracks may require an interstitial design called a forksheet, which squeezes devices together more closely without necessarily making them smaller. CFETs will then reduce cells to four tracks or possibly fewer.
Leading-edge transistors are already transitioning from the fin field-effect transistor (FinFET) architecture to nanosheets. The ultimate goal is to stack two devices atop each other in a CFET configuration. The forksheet may be an intermediary step on the way. Imec
According to Imec, chipmakers will be able to produce the finer features needed for this progression using ASML’s next generation of extreme-ultraviolet lithography. That tech, called high-numerical-aperture EUV, is under construction at ASML now, and Imec is next in line for delivery. Increasing numerical aperture, an optics term related to the range of angles over which a system can gather light, leads to more precise images.
Backside power-delivery networks
The basic idea in backside power-delivery networks is to remove all the interconnects that send power—as opposed to data signals—from above the silicon surface and place them below it. This should allow for less power loss, because the power delivering interconnects can be larger and less resistant. It also frees up room above the transistor layer for signal-carrying interconnects, possibly leading to more compact designs [see “Next-Gen Chips Will Be Powered From Below”]. In the future, even more could be moved to the backside of the silicon. For example, so-called global interconnects—those that span (relatively) great distances to carry clock and other signals—could go beneath the silicon. Or engineers could add active power-delivery devices, such as electrostatic discharge safety diodes.
3D integration
There are several ways to do 3D integration, but the most advanced today are wafer-to-wafer and die-to-wafer hybrid bonding [see “3 Ways 3D Chip Tech Is Upending Computing”].
These two provide the highest density of interconnections between two silicon dies. But this method requires that the two dies are designed together, so their functions and interconnect points align, allowing them to act as a single chip, says Anne Jourdain, principal member of the technical staff. Imec R&D is on track to be able to produce millions of 3D connections per square millimeter in the near future.
Getting to CMOS 2.0
CMOS 2.0 would take disaggregation and heterogeneous integration to the extreme. Depending on which technologies make sense for the particular applications, it could result in a 3D system that incorporates layers of embedded memory, I/O and power infrastructure, high-density logic, high drive-current logic, and huge amounts of cache memory. Getting to that point will take not just technology development but also the tools and training to discern which technologies would actually improve a system. As Bardon points out, smartphones, servers, machine-learning accelerators, and augmented- and virtual-reality systems all have very different requirements and constraints. What makes sense for one might be a dead end for the other.
- The Electrome: The Next Great Frontier for Biomedical Technology, by Stephen Cass on 23 May 2023 at 21:00
Stephen Cass: Welcome to Fixing the Future, an IEEE Spectrum podcast. This episode is brought to you by IEEE Xplore, the digital library with over 6 million technical documents and free search. I’m senior editor Stephen Cass, and today I’m talking with a former Spectrum editor, Sally Adee, about her new book, We Are Electric: The New Science of Our Body’s Electrome. Sally, welcome to the show. Sally Adee: Hi, Stephen. Thank you so much for having me. Cass: It’s great to see you again, but before we get into exactly what you mean by the body’s electrome and so on, I see that in researching this book, you actually got yourself zapped quite a bit in a number of different ways. So I guess my first question is: are you okay? Adee: I mean, as okay as I can imagine being. Unfortunately, there’s no experimental sort of condition and control condition. I can’t see the self I would have been in the multiverse version of myself that didn’t zap themselves. So I think I’m saying yes. Cass: The first question I have then is what is an electrome? Adee: So the electrome is this word, I think, that’s been burbling around the bioelectricity community for a number of years. The first time it was committed to print is a 2016 paper by this guy called Arnold De Loof, a researcher out in Europe. But before that, a number of the researchers I spoke to for this book told me that they had started to see it in papers that they were reviewing. And I think it wasn’t sort of defined consistently always because there’s this idea that seems to be sort of bubbling to the top, bubbling to the surface, that there are these electrical properties that the body has, and they’re not just epiphenomena, and they’re not just in the nervous system. They’re not just action potentials, but that there are electrical properties in every one of our cells, but also at the organ level, potentially at the sort of entire system level, that people are trying to figure out what they actually do. 
And just as action potentials aren’t just epiphenomena, but actually our control mechanisms, they’re looking at how these electrical properties work in the rest of the body, like in the cells, membrane voltages and skin cells, for example, are involved in wound healing. And there’s this idea that maybe these are an epigenetic variable that we haven’t been able to conscript yet. And there’s such promise in it, but a lot of the research, the problem is that a lot of the research is being done across really far-flung scientific communities, some in developmental biology, some of it in oncology, a lot of it in neuroscience, obviously. But what this whole idea of the electrome is— I was trying to pull this all together because the idea behind the book is I really want people to just develop this umbrella of bioelectricity, call it the electrome, call it bioelectricity, but I kind of want the word electrome to do for bioelectricity research what the word genome did for molecular biology. So that’s basically the spiel. Cass: So I want to surf back to a couple points you raised there, but first off, just for people who might not know, what is an action potential? Adee: So the action potential is the electrical mechanism by which the nervous signal travels, either to actuate motion at the behest of your intent or to gain sensation and sort of perceive the world around you. And that’s the electrical part of the electrochemical nervous impulse. So everybody knows about neurotransmitters at the synapse and— well, not everybody, but probably Spectrum listeners. They know about the serotonin that’s released and all these other little guys. 
But the thing is you wouldn’t be able to have that release without the movement of charged particles called ions in and out of the nerve cell that actually send this impulse down and allow it to travel at a rate of speed that’s fast enough to let you yank your hand away from a hot stove when you’ve touched it, before you even sort of perceive that you did so. Cass: So that actually brings me to my next question. So you may remember in some of Spectrum’s editorial meetings when we were deciding if a tech story was for us or not, that literally, we would often ask, “Where is the moving electron? Where is the moving electron?” But bioelectricity is not really based on moving electrons. It’s based on these ions. Adee: Yeah. So let’s take the neuron as an example. So what you’ve got is— let me do like a— imagine a spherical cow for a neuron, okay? So you’ve got a blob and it’s a membrane, and that separates the inside of your cell from the outside of your cell. And this membrane is studded with tens of thousands, I think, little pores called ion channels. And the pores are not just sieve pores. They’re not inert. They’re really smart. And they decide which ions they like. Now, let’s go to the ions. Ions are suffusing your extracellular fluid, all the stuff that bathes you. It’s basically the reason they say you’re 66 percent water or whatever. This is like sea water. It’s got sodium, potassium, calcium, etc., and these ions are charged particles. So when you’ve got a cell, it likes potassium, the neuron, it likes potassium, it lets it in. It doesn’t really like sodium so much. It’s got very strong preferences. So in its resting state, which is its happy place, those channels allow potassium ions to enter. And those are probably where the electrons are, actually, because an ion, it’s got a plus-one charge or a minus-one charge based on— but let’s not go too far into it.
But basically, the cell allows the potassium to come inside, and its resting state, which is its happy place, the separation of the potassium from the sodium causes, for all sorts of complicated reasons, a charge inside the cell that is minus 70 degree— sorry, minus 70 millivolts with respect to the extracellular fluid. Cass: Before I read your book, I kind of had the idea that how neurons use electricity was, essentially, settled science, very well understood, all kind of squared away, and this was how the body used electricity. But even when it came to neurons, there’s a lot of fundamentals, kind of basic things about how neurons use electricity that we really only established relatively recently. Some of the research you’re talking about is definitely not a century-old kind of basic science about how these things work. Adee: No, not at all. In fact, there was a paper released in 2018 that I didn’t include, which I’m really annoyed by. I just found it recently. Obviously, you can’t find all the papers. But it’s super interesting because it blends that whole sort of ionic basis of the action potential with another thing in my book that’s about how cell development is a little bit like a battery getting charged. Do you know how cells assume an electrical identity that may actually be in charge of the cell fate that they meet? And so we know abou— sorry, the book goes into more detail, but it’s like when a cell is stem or a fertilized egg, it’s depolarized. It’s at zero. And then when it becomes a nerve cell, it goes to that minus 70 that I was talking about before. If it becomes a fat cell, it’s at minus 50. If it’s musculoskeletal tissue, it goes to minus 90. Liver cells are like around minus 40. And so you’ve got real identitarian diversity, electrical diversity in your tissues, which has something to do with what they end up doing in the society of cells. So this paper that I was talking about, the 2018 paper, they actually looked at neurons. 
This was work from Denis Jabaudon at the University of Geneva, and they were looking at how neurons actually differentiate. Because when baby neurons are born-- your brain is made of all kinds of cells. It’s not just cortical cells. There’s a staggering variety of classes of neurons. And as cells actually differentiate, you can watch their voltage change, just like you can do in the rest of the body with these electrosensitive dyes. So that’s an aspect of the brain that we hadn’t even realized until 2018. Cass: And that all leads me to my next point, which is when we think bioelectricity, we think, okay, nerves zapping around. But neurons are not the only bioelectric network in the body. So talk about some of the other sorts of electrical networks we have that are completely, or largely, separate from our neural networks? Adee: Well, so Michael Levin is a professor at Tufts University. He does all kinds of other stuff, but mainly, I guess, he’s like the Paul Erdos of bioelectricity, I like to call him, because he’s sort of the central node. He’s networked into everybody, and I think he’s really trying to, again, also assemble this umbrella of bioelectricity to study this all in the aggregate. So his idea is that we are really committed to this idea of bioelectricity being in charge of our sort of central communications network, the way that we understand the environment around us and the way that we understand our ability to move and feel within it. But he thinks that bioelectricity is also how— that the nervous system kind of hijacked this mechanism, which is way older than any nervous system. And he thinks that we have another underlying network that is about our shape, and that this is bioelectrically mediated in really important ways, which impacts development, of course, but also wound healing. Because if you think about the idea that your body understands its own shape, what happens when you get a cut? How does it heal it?
It has to go back to some sort of memory of what its shape is in order to heal it over. In animals that regenerate, they have a completely different electrical profile after they’ve been—so after they’ve had an arm chopped off. So it’s a very different electrical— yeah, it’s a different electrical process that allows a starfish to regrow a limb than the one that allows us to scar over. So you’ve got this thing called a wound current. Your skin cells are arranged in this real tight wall, like little soldiers, basically. And what’s important is that they’re polarized in such a way that if you cut your skin, all the sort of ions flow out in a certain way, which creates this wound current, which then generates an electric field, and the electric field acts like a beacon. It’s like a bat signal, right? And it guides in these little helper cells, the macrophages that come and gobble up the mess and the keratinocytes and the guys who build it back up again and scar you over. And it starts out strong, and as you scar over, as the wound heals, it very slowly goes away. By the time the wound is healed, there’s no more field. And what was super interesting is this guy, Richard Nuccitelli, invented this thing called the Dermacorder that’s able to sense and evaluate the electric field. And he found that in people over the age of 65, the wound field is less than half of what it is in people under 25. And that actually goes in line with another weird thing about us, which is that our bioelectricity— or sorry, our regeneration capabilities are time-dependent and tissue-dependent. So you probably know that the intestinal tissue regenerates all the time. You’re going to digest next week’s food with totally different cells than this morning’s food. But also, we’re time-dependent because when we’re just two cells, if you cleave that in half, you get identical twins. 
Later on during fetal development, healing is totally scarless, which is something we found out because, when we started being able to do fetal surgery in the womb, it was determined that we heal, basically, scarlessly. Then we’re born, and until we are between the ages of 7 and 11, if you chop off a fingertip, it regenerates perfectly, including the nail, but after that we lose the ability. And so it seems like the older we get, the less we regenerate. And so various programs are now trying to figure out how to take control of various aspects of our sort of bioelectrical systems to do things like radically accelerate healing, for example, or how to possibly re-engage the body’s developmental processes in order to regenerate preposterous things like a limb. I mean, it sounds preposterous now. Maybe in 20 years, it’ll just be. Cass: I want to get into some of the technologies that people are thinking of building on this sort of new science. Part of it is that the history of this field, both scientifically and technologically, has really been plagued by the shadow of quackery. And can you talk a little bit about this and how, on the one hand, there are some very bad ideas we’re very glad that we stopped doing, but on the other, this history has cast a shadow on current research and on trying to get real therapies to patients? Adee: Yeah, absolutely. That was actually one of my favorite chapters to write, was the spectacular pseudoscience one, because, I mean, that is so much fun. So it can be boiled down to the fact that we were trigger-happy because we see this electricity, we’re super excited about it. We start developing early tools to start manipulating it in the 1700s. And straight away, it’s like, this is an amazing new tool, and there’s all these sort of folk cures out there that we then decide that we’re going to take— not into the clinic.
I don’t know what you’d call it, but people just start dispensing this stuff. This is separate from the discovery of endogenous electrical activity, which is what Luigi Galvani famously discovered in the late 1700s. He starts doing this. He’s an anatomist. He’s not an electrician. Electrician, by the way, is what they used to call the sort of literati who were in charge of discovery around electricity. And it had a really different connotation at the time; they were kind of like the rocket scientists of their day. But Galvani’s just an anatomist, and he starts doing all of these experiments using these new tools to zap frogs in various ways and permutations. And he decides that he has answered a whole different old question, which is how does man’s will animate his hands and let him feel the world around him? And he says, “This is electrical in nature.” This is a long-standing mystery. People have been bashing their heads against it for the past 100, 200 years. But he says that this is electrical, and there’s a big, long fight, which I won’t get into too much, between Volta, the guy who invented the battery, and Galvani. Volta says, “No, this is not electrical.” Galvani says, “Yes, it is.” But owing to events, when Volta invents the battery, he basically wins the argument, not because Galvani was wrong, but because Volta had created something useful. He had created a tool that people could use to advance the study of all kinds of things. Galvani’s idea that we have an endogenous electrical sort of impulse, it didn’t lead to anything that anybody could use because we didn’t have tools sensitive enough to really measure it. We only sort of had indirect measurements of it. And after Galvani dies in ignominy, his nephew decides to take it upon himself to rescue, single-handedly, his uncle’s reputation. The problem is, the way he does it is with a series of grotesque, spectacular experiments.
He very famously reanimated— well, zapped until they shivered, the corpses of all these dead guys, dead criminals, and he was doing really intense things like sticking electrodes connected to huge voltaic piles (proto-batteries) into the rectums of dead prisoners, which would make them sit up halfway and point at the people who were assembled, this very titillating stuff. Many celebrities of the time would crowd around these demonstrations. Anyway, so Galvani basically—or sorry, Aldini, the nephew, basically just opens the door to everyone to be like, “Look what we can do with electricity.” Then in short order, there’s a guy who creates something called the Celestial Bed, which is a thing— they’ve got rings, they’ve got electric belts for stimulating the nethers. The Celestial Bed is supposed to help infertile couples. This is how sort of just wild electricity is in those days. It’s kind of like— you know how everybody went crazy for crypto scams last year? Electricity was like the crypto of 1828 or whatever, 1830s. And the Celestial Bed, so people would come and they would pay £9,000 to spend a night in it, right? Well, not at the time. That’s in today’s money. And it didn’t even use electricity. It used the idea of electricity. It was homeopathy, but electricity. You don’t even know where to start. So this is the sort of caliber of pseudoscience, and this has really echoed down through the years. That was in the 1800s. But when people submit papers or grant applications, I heard more than one researcher say to me— people would look at this electric stuff, and they’d be like, “Does anyone still believe this shit?” And it’s like, this is rigorous science, but it’s been just tarnished by the association with this. Cass: So you mentioned wound care, and the book talks about some of the ways [inaudible] wound care. But we’re also looking at other really ambitious ideas like regenerating limbs as part of this extension of wound care.
And also, you make the point of certainly doing diagnostics and then possibly treatments for things like cancer, thinking about cancer in a very different way than the really very, very tightly focused genetic view we have of cancer now, and thinking about it kind of literally in a wider context. So can you talk about that a little bit? Adee: Sure. And I want to start by saying that I went to a lot of trouble to be really careful in the book. I think cancer is one of those things that— I’ve had cancer in my family, and it’s tough to talk about it because you don’t want to give people the idea that there’s a cure for cancer around the corner when this is basic research and intriguing findings, because it’s not fair. And I sort of struggled. I thought for a while, like, “Do I even bring this up?” But the ideas behind it are so intriguing, and if there were more research dollars thrown at it, or pounds or whatever, Swiss francs, you might be able to really start moving the needle on some of this stuff. The idea is, there are two electrical— oh God, I don’t want to say avenues, but it is unfortunately what I have to do. There are two electrical avenues to pursue in cancer. The first one is something that a researcher called Mustafa Djamgoz, at Imperial College here in the UK, has been studying since the ‘90s. Because he used to be a neurobiologist. He was looking at vision. And he was talking to some of his oncologist friends, and they gave him some cancer cell lines, and he started looking at the behavior of cancer cells, the electrical behavior of cancer cells, and he started finding some really weird behaviors. Cancer cells that should not have had anything to do with action potentials, like from prostate cancer lines, when he looked at them, they were oscillating like crazy, as if they were nerves. And then he started looking at other kinds of cancer cells, and they were all oscillating, and they were doing this oscillating behavior.
So he spent like seven years sort of bashing his head against the wall. Nobody wanted to listen to him. But now way more people are investigating this. There’s going to be an ion channels in cancer symposium, I think later this month, actually, in Italy. And he found, and a lot of other researchers like this woman, Annarosa Arcangeli, they have found that the reason that cancer cells may have these oscillating properties is that this is how they communicate with each other that it’s time to leave the nest of the tumor and start invading and metastasizing. Separately, there have been very intriguing-- this is really early days. It’s only a couple of years that they’ve started noticing this, but there have been a couple of papers now. People who are on certain kinds of ion channel blockers for neurological conditions like epilepsy, for example, they have cancer profiles that are slightly different from normal, which is that if they do get cancer, they are slightly less likely to die of it. In the aggregate. Nobody should be starting to eat ion channel blockers. But they’re starting to zero in on which particular ion channels might be responsible, and it’s not just one that you and I have. These cancer kinds, they are like an expression of something that normally only exists when we’re developing in the womb. It’s part of the reason that we can grow ourselves so quickly, which of course, makes sense because that’s what cancer does when it metastasizes, it grows really quickly. So there’s a lot of work right now trying to identify how exactly to target these. And it wouldn’t be a cure for cancer. It would be a way to keep a tumor in check. And this is part of a strategy that has been proposed in the UK a little bit for some kinds of cancer, like the triple-negative kind that just keep coming back.
Instead of subjecting someone to radiation and chemo, especially when they’re older, sort of just really screwing up their quality of life while possibly not even giving them that much more time, what if instead you sort of tried to treat cancer more like a chronic disease, keep it managed, and maybe that gives a person like 10 or 20 years? That’s a huge amount of time. And all without messing up their quality of life. This is a whole conversation that’s being had, but that’s one avenue. And there’s a lot of research going on in this right now that may yield fruit sort of soon. The much more sci-fi version of this, the studies have mainly been done in tadpoles, but they’re so interesting. So Michael Levin, again, and his postdoc at the time, I think, Brook Chernet, they were looking at what happens— so it’s uncontroversial that as a cancer cell-- so let’s go back to that society of cells thing that I was talking about. You get a fertilized egg, it’s depolarized, zero, but then its membrane voltage charges, and it becomes a nerve cell or skin cell or a fat cell. What’s super interesting is that when those responsible members of your body’s society decide to abscond and say, “Screw this. I’m not participating in society anymore. I’m just going to eat and grow and become cancer,” their membrane voltage also changes. It goes much closer to zero again, almost like it’s having a midlife crisis or whatever. So what they found, what Levin and Chernet found, is that you can manipulate those cellular electrics to make the cell stop behaving cancerously. And so they did this in tadpoles. They had genetically engineered the tadpoles to express tumors, but when they made sure that the cells could not depolarize, most of those tadpoles did not express the tumors. And when they later took tadpoles that already had the tumors and they repolarized the voltage, those tumors, that tissue started acting like normal tissue, not like cancer tissue.
But again, this is the sci-fi stuff, but the fact that it was done at all is so fascinating, again, from that epigenetic sort of body pattern perspective, right? Cass: So sort of staying with that sci-fi stuff, except this one is even closer to reality. And this goes back to some of these experiments in which you zapped yourself. Can you talk a little bit about some of these sorts of devices that you can wear which appear to really enhance certain mental abilities? And some of these you [inaudible]. Adee: So the kit that I wore, I actually found out about it while I was at Spectrum, when I was at DARPATech. And this program manager told me about it, and I was really stunned to find out that just by running two milliamps of current through your brain, you would be able to improve your-- well, it’s not that your ability is improved. It was that you could go from novice to expert in half the time that it would take you normally, according to the papers. And so I really wanted to try it. I was trying to actually get an expert feature written for IEEE Spectrum, but they kept ghosting me, and then by the time I got to New Scientist, I was like, fine, I’m just going to do it myself. So they let me come over, and they put this kit on me, and it was these very sort of custom electrodes, these things that look like big daisies. And this guy had brewed his own electrolyte solution and sort of smashed it onto my head, and it was all very slimy. So I was doing this video game called DARWARS Ambush!, which is just like a training— it’s a shooter simulation to help you with shooting. So it was a gonzo stunt. It was not an experiment. But he was trying to replicate the conditions of me not knowing whether the electricity was on as much as he could. So he had it sort of behind my back, and he came in a couple of times and would either pretend to turn it on or whatever. And I was practicing and I was really bad at it. That is not my game. Let’s just put it that way.
I prefer driving games. But it was really frustrating as well because I never knew when the electricity was on. So I was just like, “There’s no difference. This sucks. I’m terrible.” And that sort of inner sort of buzz kept getting stronger and stronger because I’d also made bad choices. I’d taken a red-eye flight the night before. And I was like, “Why would I do that? Why wouldn’t I just give myself one extra day to recover before I go in and do this really complicated feature where I have to learn about flow state and electrical stimulation?” And I was just getting really tense and just angrier and angrier. And then at one point, he came in after my, I don’t know, 5th or 6th, I don’t know, 400th horrible attempt where I just got blown up every time. And then he turned on the electricity, and I could totally feel that something had happened because I have a little retainer in my mouth just at the bottom. And I was like, “Whoa.” But then I was just like, “Okay. Well, now this is going to suck extra much because I know the electricity is on, so it’s not even a freaking sham condition.” So I was mad. But then the thing started again, and all of a sudden, all the sort of buzzing little angry voices just stopped, and it was so profound. And I’ve talked about it quite a bit, but every time I remember it, I get a little chill because it was the first time I’d ever realized, number one, how pissy my inner voices are and just how distracting they are and how abusive they are. And I was like, “You guys suck, all of you.” But somebody had just put a bell jar between me and them, and that feeling of being free from them was profound. At first, I didn’t even notice because I was just busy doing stuff. And all of a sudden, I was amazing at this game and I dispatched all of the enemies and whatnot, and then afterwards, when they came in, I was actually pissed because I was just like, “Oh, now I get it right and you come in after three minutes. 
But the last times when I was screwing it up, you left me in there to cook for 20 minutes.” And they were like, “No, 20 minutes has gone by,” which I could not believe. But yeah, it was just a really fairly profound experience, which is what led me down this giant rabbit hole in the first place. Because when I wrote the feature afterwards, all of a sudden I started paying attention to the whole TDCS thing, which I hadn’t yet. I had just sort of been focusing [crosstalk]. Cass: And that’s transcranial—? Adee: Oh sorry, transcranial direct current stimulation. Cass: There you go. Thank you. Sorry. Adee: No. Yeah, it’s a mouthful. But then that’s when I started to notice that quackery we were talking about before. All that history was really informing the discussion around it because people were just like, “Oh, sure. Why don’t you zap your brain with some electricity and you become super smart.” And I was like, “Oh, did I like fall for the placebo effect? What happened here?” And there was this big study from Australia where the guy was just like, “When we average out all of the effects of TDCS, we find that it does absolutely nothing.” Other guys stimulated a cadaver to see if it would even reach the brain tissue and concluded it wouldn’t. But that’s basically what started me researching the book, and I was able to find answers to all those questions. But of course, TDCS, I mean, it’s finicky just like the electrome. It’s like your living bone is conductive. So when you’re trying to put an electric field on your head, basically, you have to account for things like how thick is that person’s skull in the place that you want to stimulate. They’re still working out the parameters. There have been some really good studies that show sort of under which particular conditions they’ve been able to make it work. It does not work for all conditions for which it is claimed to work. There is some snake oil.
There’s a lot left to be done, but a better understanding of how this affects the different layers of the sort of, I guess, call it, electrome, would probably make it something that you could use replicably. Is that a word? But also, that applies to things like deep brain stimulation, which, also, for Parkinson’s, it’s fantastic. But they’re trying to use it for depression, and in some cases, it works so—I want to use a bad word—amazingly. Just Helen Mayberg, who runs these trials, she said that for some people, this is an option of last resort, and then they get the stimulation, and they just get back on the bus. That’s her quote. And it’s like a switch that you flip. And for other people, it doesn’t work at all. Cass: Well, the book is packed with even more fantastic stuff, and I’m sorry we don’t have time to go through it, because literally, I could sit here and talk to you all day about this. Adee: I didn’t even get into the frog battery, but okay, that’s fine. Fine, fine, skip the frog. Sorry, I’m just kidding. I’m kidding, I’m kidding. Cass: And thank you so much, Sally, for chatting with us today. Adee: Oh, thank you so much. I really love talking about it, especially with you. Cass: Today on Fixing the Future, we’re talking with Sally Adee about her new book on the body’s electrome. For IEEE Spectrum, I’m Stephen Cass.
- The Strange Story of the Teens Behind the Mirai Botnet by Scott J. Shapiro on 23 May 2023 at 13:00
First-year college students are understandably frustrated when they can’t get into popular upper-level electives. But they usually just gripe. Paras Jha was an exception. Enraged that upper-class students were given priority to enroll in a computer-science elective at Rutgers, the State University of New Jersey, Paras decided to crash the registration website so that no one could enroll. On Wednesday night, 19 November 2014, at 10:00 p.m. EST—as the registration period for first-year students in spring courses had just opened—Paras launched his first distributed denial-of-service (DDoS) attack. He had assembled an army of some 40,000 bots, primarily in Eastern Europe and China, and unleashed them on the Rutgers central authentication server. The botnet sent thousands of fraudulent requests to authenticate, overloading the server. Paras’s classmates could not get through to register. The next semester Paras tried again. On 4 March 2015, he sent an email to the campus newspaper, The Daily Targum: “A while back you had an article that talked about the DDoS attacks on Rutgers. I’m the one who attacked the network.… I will be attacking the network once again at 8:15 pm EST.” Paras followed through on his threat, knocking the Rutgers network offline at precisely 8:15 p.m. On 27 March, Paras unleashed another assault on Rutgers. This attack lasted four days and brought campus life to a standstill. Fifty thousand students, faculty, and staff had no computer access from campus. On 29 April, Paras posted a message on Pastebin, a website popular with hackers for sending anonymous messages. “The Rutgers IT department is a joke,” he taunted. “This is the third time I have launched DDoS attacks against Rutgers, and every single time, the Rutgers infrastructure crumpled like a tin can under the heel of my boot.” Paras was furious that Rutgers chose Incapsula, a small cybersecurity firm based in Massachusetts, as its DDoS-mitigation provider. 
He claimed that Rutgers chose the cheapest company. “Just to show you the poor quality of Incapsula’s network, I have gone ahead and decimated the Rutgers network (and parts of Incapsula), in the hopes that you will pick another provider that knows what they are doing.” Paras’s fourth attack on the Rutgers network, taking place during finals, caused chaos and panic on campus. Paras reveled in his ability to shut down a major state university, but his ultimate objective was to force it to abandon Incapsula. Paras had started his own DDoS-mitigation service, ProTraf Solutions, and wanted Rutgers to pick ProTraf over Incapsula. And he wasn’t going to stop attacking his school until it switched.

A Hacker Forged in Minecraft

Paras Jha was born and raised in Fanwood, a leafy suburb in central New Jersey. When Paras was in the third grade, a teacher recommended that he be evaluated for attention deficit hyperactivity disorder, but his parents didn’t follow through. As Paras progressed through elementary school, his struggles increased. Because he was so obviously intelligent, his teachers and parents attributed his lackluster performance to laziness and apathy. His perplexed parents pushed him even harder. Paras sought refuge in computers. He taught himself how to code when he was 12 and was hooked. His parents happily indulged this passion, buying him a computer and providing him with unrestricted Internet access. But their indulgence led Paras to isolate himself further, as he spent all his time coding, gaming, and hanging out with his online friends. Paras was particularly drawn to the online game Minecraft. In ninth grade, he graduated from playing Minecraft to hosting servers. It was in hosting game servers that he first encountered DDoS attacks. Minecraft server administrators often hire DDoS services to knock rivals offline. As Paras learned more sophisticated DDoS attacks, he also studied DDoS defense.
As he became proficient in mitigating attacks on Minecraft servers, he decided to create ProTraf Solutions. Paras’s obsession with Minecraft attacks and defense, compounded by his untreated ADHD, led to an even greater retreat from family and school. His poor academic performance in high school frustrated and depressed him. His only solace was Japanese anime and the admiration he gained from the online community of Minecraft DDoS experts. Paras’s struggles deteriorated into paralysis when he enrolled at Rutgers, studying for a B.S. in computer science. Without his mother’s help, he was unable to regulate the normal demands of living on his own. He could not manage his sleep, schedule, or study. Paras was also acutely lonely. So he immersed himself in hacking. Paras and two hacker friends, Josiah White and Dalton Norman, decided to go after the kings of DDoS—a gang known as VDoS. The gang had been providing these services to the world for four years, which is an eternity in cybercrime. The decision to fight experienced cybercriminals may seem brave, but the trio were actually older than their rivals. The VDoS gang members had been only 14 years old when they started to offer DDoS services from Israel in 2012. These 19-year-old American teenagers would be going to battle against two 18-year-old Israeli teenagers. The war between the two teenage gangs would not only change the nature of malware. Their struggle for dominance in cyberspace would create a doomsday machine.

Bots for Tots

Here’s how three teenagers built a botnet that could take down the Internet.

The Mirai botnet, with all its devastating potential, was not the product of an organized-crime or nation-state hacking group—it was put together by three teenage boys. They rented out their botnet to paying customers to do mischief with and used it to attack chosen targets of their own. But the full extent of the danger became apparent only later, after this team made the source code for their malware public.
Then others used it to do greater harm: crashing Germany’s largest Internet service provider; attacking Dyn’s Domain Name System servers, making the Internet unusable for millions; and taking down all of Liberia’s Internet—to name a few examples. The Mirai botnet exploited vulnerable Internet of Things devices, such as Web-connected video cameras, ones that supported Telnet, an outdated system for logging in remotely. Owners of these devices rarely updated their passwords, so they could be easily guessed using a strategy called a dictionary attack. The first step in assembling a botnet was to scan random IP addresses looking for vulnerable IoT devices, ones whose passwords could be guessed. Once identified, the addresses of these devices were passed to a “loader,” which would put the malware on the vulnerable device. Infected devices located all over the world could then be used for distributed denial-of-service attacks, orchestrated by a command-and-control (C2) server. When not attacking a target, these bots would be enlisted to scan for more vulnerable devices to infect.

Botnet Madness

Botnet malware is useful for financially motivated crime because botmasters can tell the bots in their thrall to implant malware on vulnerable machines, send phishing emails, or engage in click fraud, in which botnets profit by directing bots to click pay-per-click ads. Botnets are also great DDoS weapons because they can be trained on a target and barrage it from all directions. One day in February 2000, for example, the hacker MafiaBoy knocked out Fifa.com, Amazon.com, Dell, E-Trade, eBay, CNN, as well as Yahoo, at the time the largest search engine on the Internet. After taking so many major websites offline, MafiaBoy was deemed a national-security threat. President Clinton ordered a national manhunt to find him. In April 2000, MafiaBoy was arrested and charged, and in January 2001 he pled guilty to 58 charges of denial-of-service attacks.
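The dictionary attack described above relies on nothing more exotic than a short list of factory-default usernames and passwords. A minimal sketch of that check, with an illustrative credential list (not Mirai's actual one); run against your own devices' configuration, the same test doubles as a defensive audit:

```python
# Illustrative factory-default credential pairs; Mirai's real list was
# different (and longer). These are assumptions for demonstration only.
DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("root", "root"),
    ("root", "12345"),
    ("user", "user"),
]

def is_factory_default(username: str, password: str) -> bool:
    """Return True if the credential pair appears in the default list,
    i.e., if a dictionary attack of this kind would guess it."""
    return (username, password) in DEFAULT_CREDENTIALS

# A device whose owner never changed the password is trivially guessable:
print(is_factory_default("admin", "admin"))   # True -> vulnerable
print(is_factory_default("admin", "x7!Qp2"))  # False
```

The point is how little sophistication the attack requires: no exploit, just a loop over a handful of shipped defaults that owners never changed.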
Law enforcement did not reveal MafiaBoy’s real name, as this national-security threat was 15 years old. Both MafiaBoy and the VDoS crew were adolescent boys who crashed servers. But whereas MafiaBoy did it for the sport, VDoS did it for the money. Indeed, these teenage Israeli kids were pioneering tech entrepreneurs. They helped launch a new form of cybercrime: DDoS as a service. With it, anyone could now hack with the click of a button, no technical knowledge needed. It might be surprising that DDoS providers could advertise openly on the Web. After all, DDoSing another website is illegal everywhere. To get around this, these “booter services” have long argued they perform a legitimate function: providing those who set up Web pages a means to stress test websites. In theory, such services do play an important function. But only in theory. As a booter-service provider admitted to University of Cambridge researchers, “We do try to market these services towards a more legitimate user base, but we know where the money comes from.”

The Botnets of August

Paras dropped out of Rutgers in his sophomore year and, with his father’s encouragement, spent the next year focused on building ProTraf Solutions, his DDoS-mitigation business. And just like a mafia don running a protection racket, he had to create the need for that protection. After launching four DDoS attacks his freshman year, he attacked Rutgers yet again in September 2015, still hoping that his former school would give up on Incapsula. Rutgers refused to budge. ProTraf Solutions was failing, and Paras needed cash. In May 2016, Paras reached out to Josiah White. Like Paras, Josiah frequented Hack Forums. When he was 15, he developed major portions of Qbot, a botnet worm that at its height in 2014 had enslaved half a million computers. Now 18, Josiah switched sides and worked with his friend Paras at ProTraf doing DDoS mitigation.
The hacker’s command-and-control (C2) server orchestrates the actions of many geographically distributed bots (computers under its control). Those computers, which could be IoT devices like IP cameras, can be directed to overwhelm the victim’s servers with unwanted traffic, making them unable to respond to legitimate requests. IEEE Spectrum

But Josiah soon returned to hacking and started working with Paras to take the Qbot malware, improve it, and build a bigger, more powerful DDoS botnet. Paras and Josiah then partnered with 19-year-old Dalton Norman. The trio turned into a well-oiled team: Dalton found the vulnerabilities; Josiah updated the botnet malware to exploit these vulnerabilities; and Paras wrote the C2—software for the command-and-control server—for controlling the botnet. But the trio had competition. Two other DDoS gangs—Lizard Squad and VDoS—decided to band together to build a giant botnet. The collaboration, known as PoodleCorp, was successful. The amount of traffic that could be unleashed on a target from PoodleCorp’s botnet hit a record 400 gigabits per second, almost four times the rate that any previous botnet had achieved. They used their new weapon to attack banks in Brazil, U.S. government sites, and Minecraft servers. They achieved this firepower by hijacking 1,300 Web-connected cameras. Web cameras tend to have powerful processors and good connectivity, and they are rarely patched. So a botnet that harnesses video cameras has enormous cannons at its disposal. While PoodleCorp was on the rise, Paras, Josiah, and Dalton worked on a new weapon. By the beginning of August 2016, the trio had completed the first version of their botnet malware. Paras called the new code Mirai, after the anime series Mirai Nikki. When Mirai was released, it spread like wildfire. In its first 20 hours, it infected 65,000 devices, doubling in size every 76 minutes. And Mirai had an unwitting ally in the botnet war then raging.
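Those growth figures are easy to sanity-check. Twenty hours is 1,200 minutes, or roughly 15.8 doubling periods at 76 minutes per doubling, so even a single seed device would reach tens of thousands of infections in that window:

```python
# Back-of-the-envelope check of Mirai's reported early growth:
# 65,000 infections in the first 20 hours, doubling every 76 minutes.
minutes = 20 * 60            # 20 hours expressed in minutes
doublings = minutes / 76     # about 15.8 doubling periods
population = 2 ** doublings  # growth from a single seed device
print(f"{doublings:.1f} doublings -> ~{population:,.0f} devices")
```

That comes out to roughly 57,000 devices, the same order of magnitude as the reported 65,000; the small gap is easily explained by a seed of more than one device and by scan rates that varied over time.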
Up in Anchorage, Alaska, the FBI cyber unit was building a case against VDoS. The FBI was unaware of Mirai or its war with VDoS. The agents did not regularly read online boards such as Hack Forums. They did not know that the target of their investigation was being decimated. The FBI also did not realize that Mirai was ready to step into the void. The head investigator in Anchorage was Special Agent Elliott Peterson. A former U.S. Marine, Peterson is a calm and self-assured agent with a buzz cut of red hair. At the age of 33, Peterson had returned to his native state of Alaska to prosecute cybercrime. On 8 September 2016, the FBI’s Anchorage and New Haven cyber units teamed up and served a search warrant in Connecticut on the member of PoodleCorp who ran the C2 that controlled all its botnets. On the same day, the Israeli police arrested the VDoS founders in Israel. Suddenly, PoodleCorp was no more. The Mirai group waited a couple of days to assess the battlefield. As far as they could tell, they were the only botnet left standing. And they were ready to use their new power. Mirai won the war because Israeli and American law enforcement arrested the masterminds behind PoodleCorp. But Mirai would have triumphed anyway, as it was ruthlessly efficient in taking control of Internet of Things devices and excluding competing malware. A few weeks after the arrests of those behind VDoS, Special Agent Peterson found his next target: the Mirai botnet. In the Mirai case, we do not know the exact steps that Peterson’s team took in their investigation: Court orders in this case are currently “under seal,” meaning that the court deems them secret. But from public reporting, we know that Peterson’s team got its break in the usual way—from a Mirai victim: Brian Krebs, a cybersecurity reporter whose blog was DDoSed by the Mirai botnet on 25 September. The FBI uncovered the IP address of the C2 and loading servers but did not know who had opened the accounts. 
Peterson’s team likely subpoenaed the hosting companies to learn the names, emails, cellphone numbers, and payment methods of the account holders. With this information, it would seek court orders and then search warrants to acquire the content of the conspirators’ conversations. Still, the hunt for the authors of the Mirai malware must have been a difficult one, given how clever these hackers were. For example, to evade detection Josiah didn’t just use a VPN. He hacked the home computer of a teenage boy in France and used it as the “exit node.” The orders for the botnet, therefore, appeared to come from this computer. Unfortunately for the owner, he was a big fan of Japanese anime and thus fit the profile of the hacker. The FBI and the French police discovered their mistake only after they raided the boy’s house.

Done and Done For

After wielding its power for two months, Paras dumped nearly the complete source code for Mirai on Hack Forums. “I made my money, there’s lots of eyes looking at IOT now, so it’s time to GTFO [Get The F*** Out],” Paras wrote. With that code dump, Paras had enabled anyone to build their own Mirai. And they did. Dumping code is reckless, but not unusual: if the police find source code on a hacker’s devices, the hacker can claim to have merely “downloaded it from the Internet.” Paras’s irresponsible disclosure was part of a false-flag operation meant to throw off the FBI, which had been gathering evidence indicating Paras’s involvement in Mirai and had contacted him to ask questions. Though he gave the agent a fabricated story, getting a text from the FBI probably terrified him. Mirai had captured the attention of the cybersecurity community and of law enforcement. But not until after Mirai’s source code dropped would it capture the attention of the entire United States.
The first attack after the dump was on 21 October, on Dyn, a company based in Manchester, N.H., that provides Domain Name System (DNS) resolution services for much of the East Coast of the United States.

Mike McQuade

It began at 7:07 a.m. EST with a series of 25-second attacks, thought to be tests of the botnet and Dyn’s infrastructure. Then came the sustained assaults: of one hour, and then five hours. Interestingly, Dyn was not the only target. Sony’s PlayStation video infrastructure was also hit. Because the torrents of traffic were so immense, many other websites were affected. Domains such as cnn.com, facebook.com, and nytimes.com wouldn’t load. For the vast majority of these sites’ users, the Internet became unusable. At 7:00 p.m., another 10-hour salvo hit Dyn and PlayStation. Further investigation confirmed the point of the attack. Along with Dyn and PlayStation traffic, the botnet targeted Xbox Live and Nuclear Fallout game-hosting servers. Nation-states were not aiming to hack the upcoming U.S. elections. Someone was trying to boot players off their game servers. Once again—just like MafiaBoy, VDoS, Paras, Dalton, and Josiah—the attacker was a teenage boy, this time a 15-year-old in Northern Ireland named Aaron Sterritt.

Meanwhile, the Mirai trio left the DDoS business, just as Paras said. But Paras and Dalton did not give up on cybercrime. They just took up click fraud, which was more lucrative than running a booter service. While Mirai was no longer as big as it had been, the botnet could nevertheless generate significant advertising revenue. Paras and Dalton earned as much money in one month from click fraud as they ever made with DDoS. By January 2017, they had earned over US $180,000, as opposed to a mere $14,000 from DDoSing. Had Paras and his friends simply shut down their booter service and moved on to click fraud, the world would likely have forgotten about them. But by releasing the Mirai code, Paras created imitators.
Dyn was the first major copycat attack, but many others followed. And due to the enormous damage these imitators wrought, law enforcement was intensely interested in the Mirai authors. After collecting information tying Paras, Josiah, and Dalton to Mirai, the FBI quietly brought each up to Alaska. Peterson’s team showed the suspects its evidence and gave them the chance to cooperate. Given that the evidence was irrefutable, each folded. Paras Jha was indicted twice, once in New Jersey for his attack on Rutgers, and once in Alaska for Mirai. Both indictments carried the same charge—one violation of the Computer Fraud and Abuse Act. Paras faced up to 10 years in federal prison for his actions. Josiah and Dalton were only indicted in Alaska and so faced 5 years in prison. The trio pled guilty. At the sentencing hearing held on 18 September 2018, in Anchorage, each of the defendants expressed remorse for his actions. Josiah White’s lawyer conveyed his client’s realization that Mirai was “a tremendous lapse in judgment.” Unlike Josiah, Paras spoke directly to Judge Timothy Burgess in the courtroom. Paras began by accepting full responsibility for his actions and expressed his deep regret for the trouble he’d caused his family. He also apologized for the harm he’d caused businesses and, in particular, Rutgers, the faculty, and his fellow students. The Department of Justice made the unusual decision not to ask for jail time. In its sentencing memo, the government noted “the divide between [the defendants’] online personas, where they were significant, well-known, and malicious actors in the DDoS criminal milieu and their comparatively mundane ‘real lives’ where they present as socially immature young men living with their parents in relative obscurity.” It recommended five years of probation and 2,500 hours of community service. 
The government had one more request—for that community service “to include continued work with the FBI on cybercrime and cybersecurity matters.” Even before sentencing, Paras, Josiah, and Dalton had logged close to 1,000 hours helping the FBI hunt and shut down Mirai copycats. They contributed to more than a dozen law-enforcement and research efforts. In one instance, the trio assisted in stopping a nation-state hacking group. They also helped the FBI prevent DDoS attacks aimed at disrupting Christmas-holiday shopping. Judge Burgess accepted the government’s recommendation, and the trio escaped jail time.

The most poignant moments in the hearing were Paras’s and Dalton’s singling out for praise the very person who caught them. “Two years ago, when I first met Special Agent Elliott Peterson,” Paras told the court, “I was an arrogant fool believing that somehow I was untouchable. When I met him in person for the second time, he told me something I will never forget: ‘You’re in a hole right now. It’s time you stop digging.’” Paras finished his remarks by thanking “my family, my friends, and Agent Peterson for helping me through this.”

This article appears in the June 2023 print issue as “Patch Me if You Can.”
- This Stevens Institute of Technology Student Got a Head Start in Engineering, by Monica Rozenfeld on 22 May 2023 at 18:00
Many teenagers take a job at a restaurant or retail store, but Megan Dion got a head start on her engineering career. At 16, she landed a part-time position at FXB, a mechanical, electrical, and plumbing engineering company in Chadds Ford, Pa., where she helped create and optimize project designs. She continued to work at the company during her first year as an undergraduate at the Stevens Institute of Technology, in Hoboken, N.J., where she is studying electrical engineering with a concentration in power engineering. Now a junior, Dion is part of the five-year Stevens cooperative education program, which allows her to rotate through three full-time work placements from the second quarter of the school year through August. She returns to school full time in September with a more impressive résumé. For her academic achievements, Dion received an IEEE Power & Energy Society scholarship and an IEEE PES Anne-Marie Sahazizian scholarship this year. The PES Scholarship Plus Initiative rewards undergraduates who one day are likely to build green technologies and change the way we generate and use power. Dion received US $2,000 from each scholarship toward her education. She says she’s looking forward to networking with other scholarship recipients and IEEE members. “Learning from other people’s stories and seeing myself in them and where my career could be in 10 or 15 years” motivates her, she says.

Gaining hands-on experience in power engineering

Dion’s early exposure to engineering came from her father, who owned a commercial electrical construction business for 20 years and sparked her interest in the field. He would bring her along to meetings and teach her about the construction industry. Then she was able to gain on-the-job experience at FXB, where she quickly absorbed what she observed around her. “I would carry around a notebook everywhere I went, and I took notes on everything,” she says.
“My team knew they never would have to explain something to me twice.” She gained the trust of her colleagues, and they asked her to continue working with them while she attended college. She accepted the offer and supported a critical project at the firm: designing an underground power-distribution and conduit system in the U.S. Virgin Islands to replace overhead power lines. The underground system could minimize power loss after hurricanes. Skilled in AutoCAD software, she contributed to the electrical design. Dion worked directly with the senior electrical designer and the president of the company, and she helped deliver status updates. The experience, she says, solidified her decision to become a power engineer. After completing her stint at FXB, she entered her first work placement through Stevens, which brought her to the Long Island Rail Road, in New York, through HNTB, an infrastructure design company in Kansas City, Mo. She completed an eight-month assignment at the LIRR, assisting the traction power and communications team in DC electrical system design for a major capacity-improvement project for commuters in the New York metropolitan area. Working on a railroad job was out of her comfort zone, she says, but she was up for the challenge. “In my first meeting with the firm, I was in shock,” she says. “I was looking at train tracks and had to ask someone on the team to walk me through everything I needed to know, down to the basics.” Dion describes how they spent two hours going through each type of drawing produced, including third-rail sectionalizing, negative-return diagrams, and conduit routing. Each sheet covered 15 to 30 meters of a 3.2-kilometer section of track. What Dion has appreciated most about the work placement program, she says, is learning about niche areas within power and electrical engineering.
She’s now at her second placement, at structural engineering company Thornton Tomasetti in New York City, where she is diving into forensic engineering. The role interests her because of its focus on investigating what went wrong when an engineering project failed. “It’s a career path I had never known about before,” she says. Thornton Tomasetti investigates when something goes awry during the construction process, determines who is likely at fault, and provides expert testimony in court. Dion joined IEEE in 2020 to build her engineering network. She is preparing to graduate from Stevens next year, and then plans to pursue a master’s degree in electrical engineering while working full time.

The importance of leadership and business skills

To round out her experience and expertise in power and energy, Dion is taking business courses. She figures she might one day follow in her father’s entrepreneurial path. “My dad is my biggest supporter as well as my biggest challenger,” she says. “He will always ask me ‘Why?’ to challenge my thinking and help me be the best I can be. He’s taught me to be 1 percent better each day.” She adds that she can go to him whenever she has an engineering question, pulling from his decades of experience in the industry. Because of her background—growing up around the electrical industry—she has been less intimidated when she is the only woman in a meeting, she says. She finds that being a woman in a male-dominated industry is an opportunity, adding that there is a lot of support and camaraderie among women in the field. While excelling academically, she is also a starter on the varsity volleyball team at Stevens. She has played the sport since the seventh grade. Her athletic background has taught her important skills, she says, including how to lead by example and the importance of ensuring the entire team is supported and working well together.
Dion’s competitive nature won’t allow her to hold herself back: “If I’m going to do something,” she says, “I’m going to do it the best I can.”
- Flat Lenses Made of Nanostructures Transform Tiny Cameras and Projectors, by Robert Gobron on 21 May 2023 at 15:00
Inside today’s computers, phones, and other mobile devices, more and more sensors, processors, and other electronics are fighting for space. Taking up a big part of this valuable real estate are the cameras—just about every gadget needs a camera, or two, three, or more. And the most space-consuming part of the camera is the lens. The lenses in our mobile devices typically collect and direct incoming light by refraction, using a curve in a transparent material, usually plastic, to bend the rays. So these lenses can’t shrink much more than they already have: To make a camera small, the lens must have a short focal length; but the shorter the focal length, the greater the curvature and therefore the thickness at the center. These highly curved lenses also suffer from all sorts of aberrations, so camera-module manufacturers use multiple lenses to compensate, adding to the camera’s bulk. With today’s lenses, the size of the camera and image quality are pulling in different directions. The only way to make lenses smaller and better is to replace refractive lenses with a different technology. That technology exists. It’s the metalens, a device developed at Harvard and commercialized at Metalenz, where I am an applications engineer. We create these devices using traditional semiconductor-processing techniques to build nanostructures onto a flat surface. These nanostructures use a phenomenon called metasurface optics to direct and focus light. These lenses can be extremely thin—a few hundred micrometers thick, about twice the thickness of a human hair. And we can combine the functionality of multiple curved lenses into just one of our devices, further addressing the space crunch and opening up the possibility of new uses for cameras in mobile devices.

Centuries of lens alternatives

Before I tell you how the metalens evolved and how it works, consider a few previous efforts to replace the traditional curved lens.
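The trade-off between focal length and center thickness can be made concrete with the thin-lens lensmaker's equation, 1/f = (n - 1)(1/R1 - 1/R2). For a symmetric biconvex lens this reduces to R = 2(n - 1)f, and the extra center thickness of each surface is its sagitta. A quick sketch (all numbers are illustrative, not taken from any real camera module):

```python
import math

def center_sag(f_mm, aperture_mm, n=1.53):
    """Extra center thickness (sagitta, per surface, in mm) of a symmetric
    biconvex thin lens of focal length f_mm with the given aperture.
    Thin-lens lensmaker's equation: 1/f = (n - 1) * (2/R)  =>  R = 2(n - 1)f.
    """
    R = 2 * (n - 1) * f_mm               # required surface radius of curvature
    h = aperture_mm / 2                  # semi-aperture
    return R - math.sqrt(R * R - h * h)  # sagitta of one surface

# Halving the focal length roughly doubles the center bulge:
for f in (6.0, 3.0):
    print(f"f = {f} mm -> sag per surface = {center_sag(f, 3.0):.3f} mm")
```

Shrinking the focal length forces a smaller radius of curvature, which makes the lens bulge thicker at its center, which is exactly why refractive camera lenses resist further miniaturization.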
Conceptually, any device that manipulates light does so by altering its three fundamental properties: phase, polarization, and intensity. The idea that any wave or wave field can be deconstructed down to these properties was proposed by Christiaan Huygens in 1678 and is a guiding principle in all of optics.

In this single metalens [between tweezers], the pillars are less than 500 nanometers in diameter. The black box at the bottom left of the enlargement represents 2.5 micrometers. Metalenz

In the early 19th century, the world’s most powerful economies placed great importance on the construction of lighthouses with larger and more powerful projection lenses to help protect their shipping interests. However, as these projection lenses grew larger, so did their weight. As a result, the physical size of a lens that could be raised to the top of a lighthouse and structurally supported placed limits on the power of the beam the lighthouse could produce. The French physicist Augustin-Jean Fresnel realized that if he cut a lens into facets, much of the central thickness of the lens could be removed while retaining the same optical power. The Fresnel lens represented a major improvement in optical technology and is now used in a host of applications, including automotive headlights and brake lights, overhead projectors, and—still—lighthouse projection lenses. However, the Fresnel lens has limitations. For one, the flat edges of facets become sources of stray light. For another, faceted surfaces are more difficult to manufacture and polish precisely than continuously curved ones are. It’s a no-go for camera lenses, given the surface-accuracy requirements needed to produce good images. Another approach, now widely used in 3D sensing and machine vision, traces its roots to one of the most famous experiments in modern physics: Thomas Young’s 1802 demonstration of diffraction.
This experiment showed that light behaves like a wave, and when the waves meet, they can amplify or cancel one another depending on how far the waves have traveled. The so-called diffractive optical element (DOE) based on this phenomenon uses the wavelike properties of light to create an interference pattern—that is, alternating regions of dark and light, in the form of an array of dots, a grid, or any number of shapes. Today, many mobile devices use DOEs to convert a laser beam into “structured light.” This light pattern is projected, captured by an image sensor, then used by algorithms to create a 3D map of the scene. These tiny DOEs fit nicely into small gadgets, yet they can’t be used to create detailed images. So, again, applications are limited.

Enter the metalens

Enter the metalens. Developed at Harvard by a team led by professor Federico Capasso, then-graduate student Rob Devlin, research associates Reza Khorasaninejad, Wei Ting Chen, and others, metalenses work in a way that’s fundamentally different from any of these other approaches. A metalens is a flat glass surface with a semiconductor layer on top. Etched in the semiconductor is an array of pillars several hundred nanometers high. These nanopillars can manipulate light waves with a degree of control not possible with traditional refractive lenses. Imagine a shallow marsh filled with seagrass standing in water. An incoming wave causes the seagrass to sway back and forth, sending pollen flying off into the air. If you think of that incoming wave as light energy, and the nanopillars as the stalks of seagrass, you can picture how the properties of a nanopillar, including its height, thickness, and position next to other nanopillars, might change the distribution of light emerging from the lens.

A 12-inch wafer can hold up to 10,000 metalenses, made using a single semiconductor layer. Metalenz

We can use the ability of a metalens to redirect and change light in a number of ways.
We can scatter and project light as a field of infrared dots. Invisible to the eye, these dots are used in many smart devices to measure distance, mapping a room or a face. We can sort light by its polarization (more on that in a moment). But probably the best way to explain how we are using these metasurfaces as a lens is by looking at the most familiar lens application—capturing an image. The process starts by illuminating a scene with a monochromatic light source—a laser. (While using a metalens to capture a full-color image is conceptually possible, that is still a lab experiment and far from commercialization.) The objects in the scene bounce the light all over the place. Some of this light comes back toward the metalens, which is pointed, pillars out, toward the scene. These returning photons hit the tops of the pillars and transfer their energy into vibrations. The vibrations—called plasmons—travel down the pillars. When that energy reaches the bottom of a pillar, it exits as photons, which can then be captured by an image sensor. Those photons don’t need to have the same properties as those that entered the pillars; we can change these properties by the way we design and distribute the pillars.

From concept to commercialization

Researchers around the world have been exploring the concept of metalenses for decades. In a paper published in 1968 in Soviet Physics Uspekhi, Russian physicist Victor Veselago put the idea of metamaterials on the map, hypothesizing that nothing precluded the existence of a material that exhibits a negative index of refraction. Such a material would interact with light very differently than a normal material would. Where light ordinarily bounces off a material in the form of reflection, it would pass around this type of metamaterial like water going around a boulder in a stream. It took until 2000 before the theory of metamaterials was implemented in the lab. That year, Richard A.
Shelby and colleagues at the University of California, San Diego, demonstrated a negative-refractive-index metamaterial in the microwave region. They published the discovery in 2001 in Science, causing a stir as people imagined invisibility cloaks. (While intriguing to ponder, creating such a device would require precisely manufacturing and assembling thousands of metasurfaces.) The first metalens to create high-quality images with visible light came out of Federico Capasso’s lab at Harvard. Demonstrated in 2016, with a description of the research published in Science, the technology immediately drew interest from smartphone manufacturers. Harvard then licensed the foundational intellectual property exclusively to Metalenz, where it has now been commercialized.

A single metalens [right] can replace a stack of traditional lenses [left], simplifying manufacturing and dramatically reducing the size of a lens package. Metalenz

Since then, researchers at Columbia University, Caltech, and the University of Washington, working with Tsinghua University, in Beijing, have also demonstrated the technology. Much of the development work Metalenz does involves fine-tuning the way the devices are designed. In order to translate image features like resolution into nanoscale patterns, we developed tools to help calculate the way light waves interact with materials. We then convert those calculations into design files that can be used with standard semiconductor-processing equipment. The first wave of optical metasurfaces to make their way into mobile imaging systems has on the order of 10 million silicon pillars on a single flat surface only a few millimeters square, with each pillar precisely tuned to accept the correct phase of light, a painstaking process even with the help of advanced software. Future generations of the metalens won’t necessarily have more pillars, but they’ll likely have more sophisticated geometries, like sloped edges or asymmetric shapes.
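The pillar-by-pillar "tuning" has a clean mathematical target. For an ideal flat lens of focal length f, the surface at radius r from the center must impart the phase φ(r) = (2π/λ)(f − √(r² + f²)), wrapped into [0, 2π); the designer's job is then to find a pillar geometry that realizes each required phase. A sketch of the target-phase calculation follows; the wavelength, focal length, and pitch are illustrative values, not Metalenz's actual design parameters:

```python
import math

def target_phase(r_um, f_um, wavelength_um):
    """Ideal focusing-metalens phase at radius r, wrapped to [0, 2*pi).
    phi(r) = (2*pi/lambda) * (f - sqrt(r^2 + f^2))
    """
    phi = (2 * math.pi / wavelength_um) * (f_um - math.hypot(r_um, f_um))
    return phi % (2 * math.pi)

# Illustrative design: 940-nm near-infrared light, 500-um focal length,
# pillars on a 400-nm pitch sampled along one radius.
wavelength, focal = 0.94, 500.0
for i in range(0, 1000, 250):  # every 250th pillar along the radius
    r = i * 0.4                # radius in micrometers
    print(f"pillar at r = {r:6.1f} um -> phase {target_phase(r, focal, wavelength):.2f} rad")
```

The modulo wrap is the same trick the Fresnel lens plays with thickness: only the phase within one wavelength matters, so the profile repeats in zones rather than accumulating into a thick curved surface.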
Metalenses migrate to smartphones

Metalenz came out of stealth mode in 2021, announcing that it was getting ready to scale up production of devices. Manufacturing was not as big a challenge as design because the company makes metasurfaces using the same materials, lithography, and etching processes used to make integrated circuits. In fact, metalenses are less demanding to manufacture than even a very simple microchip because they require only a single lithography mask, as opposed to the dozens required by a microprocessor. That makes them less prone to defects and less expensive. Moreover, the size of the features on an optical metasurface is measured in hundreds of nanometers, whereas foundries are accustomed to making chips with features smaller than 10 nanometers. And, unlike plastic lenses, metalenses can be made in the same foundries that produce the other chips destined for smartphones. This means they could be directly integrated with the CMOS camera chips on site rather than having to be shipped to another location, which reduces their costs still further.

A single meta-optic, in combination with an array of laser emitters, can be used to create the type of high-contrast, near-infrared dot or line pattern used in 3D sensing. Metalenz

In 2022, ST Microelectronics announced the integration of Metalenz’s metasurface technology into its FlightSense modules. Previous generations of FlightSense have been used in more than 150 models of smartphones, drones, robots, and vehicles to detect distance. Such products with Metalenz technology inside are already in consumer hands, though ST Microelectronics isn’t releasing specifics. Indeed, distance sensing is a sweet spot for the current generation of metalens technology, which operates at near-infrared wavelengths. For this application, many consumer electronics companies use a time-of-flight system, which has two optical components: one that transmits light and one that receives it.
The transmitting optics are more complicated. These involve multiple lenses that collect light from a laser and transform it into parallel light waves—or, as optical engineers call it, a collimated beam. They also require a diffraction grating that turns the collimated beam into a field of dots. A single metalens can replace all of those transmitting and receiving optics, saving real estate within the device as well as reducing cost. And a metalens does the field-of-dots job better in difficult lighting conditions, because it can illuminate a broader area using less power than a traditional lens, directing more of the light to where you want it.

The future is polarized

Conventional imaging systems, at best, gather information only about the spatial position of objects and their color and brightness. But light carries another type of information: the orientation of the light waves as they travel through space—that is, the polarization. Future metalens applications will take advantage of the technology’s ability to detect polarized light. The polarization of light reflecting off an object conveys all sorts of information about that object, including surface texture, type of surface material, and how deeply light penetrates the material before bouncing back to the sensor. Prior to the development of the metalens, a machine-vision system would require complex optomechanical subsystems to gather polarization information. These typically rotate a polarizer—structured like a fence to allow only waves oriented at a certain angle to pass through—in front of a sensor. They then monitor how the angle of rotation affects the amount of light hitting the sensor.

Metasurface optics are capable of capturing polarization information from light, revealing a material’s characteristics and providing depth information. Metalenz

A metalens, by contrast, doesn’t need a fence; all the incoming light comes through.
Then it can be redirected to specific regions of the image sensor based on its polarization state, using a single optical element. If, for example, light is polarized along the X axis, the nanostructures of the metasurface will direct the light to one section of the image sensor. However, if it is polarized at 45 degrees to the X axis, the light will be directed to a different section. Then software can reconstruct the image with information about all its polarization states. Using this technology, we can replace previously large and expensive laboratory equipment with tiny polarization-analysis devices incorporated into smartphones, cars, and even augmented-reality glasses. A smartphone-based polarimeter could let you determine whether a stone in a ring is diamond or glass, whether concrete is cured or needs more time, or whether an expensive hockey stick is worth buying or contains microcracks. Miniaturized polarimeters could be used to determine whether a bridge’s support beam is at risk of failure, whether a patch on the road is black ice or just wet, or whether a patch of green is really a bush or a painted surface being used to hide a tank. These devices could also help enable spoof-proof facial identification, since light reflects off a 2D photo of a person at different angles than off a 3D face, and off a silicone mask differently than it does off skin. Handheld polarizers could improve remote medical diagnostics—for example, polarization is used in oncology to examine tissue changes. But like the smartphone itself, it’s hard to predict where metalenses will take us. When Apple introduced the iPhone in 2007, no one could have predicted that it would spawn companies like Uber. In the same way, perhaps the most exciting applications of metalenses are ones we can’t even imagine yet.
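The per-pixel reconstruction software mentioned above typically works through Stokes parameters. Given the intensities recorded behind analyzers at 0, 45, 90, and 135 degrees (whether from a rotating polarizer or from the separate sensor regions a metasurface feeds), the linear-polarization state follows directly. This is a generic textbook sketch, not Metalenz's actual pipeline:

```python
import math

def linear_stokes(i0, i45, i90, i135):
    """Recover linear-polarization info from four analyzer intensities.
    Returns (polarization angle in degrees, degree of linear polarization)."""
    s0 = (i0 + i45 + i90 + i135) / 2  # total intensity
    s1 = i0 - i90                     # horizontal vs. vertical component
    s2 = i45 - i135                   # +45 vs. -45 degree component
    angle = math.degrees(0.5 * math.atan2(s2, s1))
    dolp = math.hypot(s1, s2) / s0    # 1.0 means fully linearly polarized
    return angle, dolp

# Light fully polarized at 30 degrees: Malus's law gives I = cos^2(analyzer - 30).
meas = [math.cos(math.radians(a - 30)) ** 2 for a in (0, 45, 90, 135)]
angle, dolp = linear_stokes(*meas)
print(f"recovered angle = {angle:.1f} deg, DoLP = {dolp:.2f}")
```

Run per pixel, the same few lines turn four intensity images into an angle map and a degree-of-polarization map, which is where the texture and material cues described above come from.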
- Video Friday: Lunar Transport by Evan Ackerman on 19 May 2023 at 18:30
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
ICRA 2023: 29 May–2 June 2023, LONDON
Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS
Enjoy today’s videos! LATTICE is an undergrad project from Caltech that’s developing a modular robotic transportation system for the lunar surface that uses autonomous rovers to set up a sort of cable car system to haul things like ice out of deep craters to someplace more useful. The prototype is fully functional, and pretty cool to watch in action. We’re told that the team will be targeting a full system demonstration deploying across a “crater” on Earth this time next year. As to what those quotes around “crater” mean, your guess is as good as mine. [ Caltech ] Thanks, Lucas! Happy World Cocktail Day from Flexiv! [ Flexiv ] Here’s what Optimus has been up to lately. As per usual, the robot is moderately interesting, but it’s probably best to mostly just ignore Musk. [ Tesla ] The INSECT tarsus-inspired compliant robotic grippER with soft adhesive pads (INSECTER) uses only a single electric actuator with a cable-driven mechanism. It can be easily controlled to perform a gripping motion akin to an insect tarsus (i.e., wrapping around the object) for handling various objects. [ Paper ] Thanks, Poramate! Congratulations to ANYbotics on their $50 million Series B! And from 10 years ago (!) at ICRA 2013, here is a video I took of StarlETH, one of ANYmal’s ancestors.
[ ANYbotics ] In this video we present results from the recent field-testing campaign of the DigiForest project at Evo, Finland. The DigiForest project started in September 2022 and runs up to February 2026. It brings together diverse partners working on aerial robots, walking robots, autonomous lightweight harvesters, as well as forestry decision makers and commercial companies with the goal to create a full data pipeline for digitized forestry. [ DigiForest ] The Robotics and Perception Group at UZH will be presenting some new work on agile autonomous high-speed flight through cluttered environments at ICRA 2023. [ Paper ] Robots who lift together, stay together. [ Sanctuary AI ] The next CYBATHLON competition, which will take place again in 2024, breaks down barriers between the public, people with disabilities, researchers and technology developers. The initiative promotes the inclusion and participation of people with disabilities and improves assistance systems for use in everyday life by the end users. [ Cybathlon ]
- IEEE Celebrates Engineering Pioneers and Emerging Technologies by Joanna Goodrich on 19 May 2023 at 18:00
IEEE’s Vision, Innovation, and Challenges Summit and Honors Ceremony showcases emerging technologies and celebrates engineering pioneers who laid the groundwork for many of today’s electronic devices. I attended this year’s events, held on 4 and 5 May in Atlanta. Here are highlights of the sessions, which are available on IEEE.tv. The summit kicked off on 4 May at the Georgia Aquarium with a reception and panel discussion on climate change and sustainability, moderated by Saifur Rahman, IEEE president and CEO. The panel featured Chris Coco, IEEE Fellow Alberto Moreira, and IEEE Member Jairo Garcia. Coco is senior director for aquatic sustainability at the aquarium. Moreira is director of the German Aerospace Center Microwaves and Radar Institute in Oberpfaffenhofen, Bavaria. Garcia is CEO of Urban Climate Nexus in Atlanta. UCN assists U.S. cities in creating and executing climate action and resilience plans. The panelists focused on how the climate crisis is affecting the ocean and ways technology is helping to track environmental changes. Coco said one of the biggest challenges facilities such as his are facing is finding enough food for their animals. Because sea levels and temperatures are rising, more than 80 percent of marine life is migrating toward the Earth’s poles and away from warmer water, he said. With fish and other species moving to new habitats, ocean predators that rely on them for food are following them. This migration is making it more difficult to find food for aquarium fish, Coco said. He added that technology on buoys is monitoring the water’s quality, temperature, and levels. Moreira, recipient of this year’s IEEE Dennis J. Picard Medal for Radar Technologies and Applications, developed a space-based synthetic aperture radar system that can monitor the Earth’s health. The system, consisting of two orbiting satellites, generates 3D maps of the planet’s surface with 2-meter accuracy and lets researchers track sea levels and deforestation.
Policymakers can use the data, Moreira said, to mitigate the impact or adapt to the changes. Those who developed technologies that changed people’s lives were recognized at the 2023 Honor Ceremony in Atlanta. [Robb Cohen Photography & Video] Bridging the digital divide, ethics in AI, and the role of robotics The IEEE Vision, Innovation, and Challenges Summit got underway on 5 May at the Hilton Atlanta, featuring panel discussions with several of this year’s award recipients about concerns related to information and communication technology (ICT), career advice, and artificial intelligence. The event kicked off with a “fireside chat” between Vint Cerf and Katie Hafner, a technology journalist. Cerf, widely known as the “Father of the Internet,” is the recipient of this year’s IEEE Medal of Honor. He is being recognized for helping to create “the Internet architecture and providing sustained leadership in its phenomenal growth in becoming society’s critical infrastructure.” Reflecting on his career, Cerf said “the most magical thing that came out of the Internet is the collection of people that came together to design, build, and get the Internet to work.” The IEEE Life Fellow also spoke about the biggest challenges society faces today with ICT, including the digital divide and people using the Internet maliciously. “I don’t want anyone to be denied access to the Internet, whether it’s because they don’t have physical access or can’t afford the service,” Cerf said. “We’re seeing a rapid increase in access recently, and I’m sure before the end of this decade anyone who wants access will have it.” But, he added, “People are doing harmful things on the Internet to other people, such as ransomware, malware, and disinformation. I’m not surprised this is happening. It’s human frailty being replicated in the online environment. The hard part is figuring out what to do about it.” During the Innovators Showcase session, panelists Luc Van den hove, IEEE Life Fellow Melba M.
Crawford, and IEEE Fellow James Truchard offered advice on how to lead a successful company or research lab. They agreed that it’s important to bring together people from multiple disciplines and to ensure the market is ready for the product in development. As for moving up the career ladder, Truchard said people should not exclude the role of luck. “Nothing beats dumb luck,” he said, laughing. He is a former president and CEO of National Instruments, an engineering-solutions company he helped found in Austin, Texas. He is the recipient of the IEEE James H. Mulligan Jr. Education Medal. With the launch of ChatGPT, generative AI has become a hot topic among technologists. The “Artificial Intelligence and ChatGPT” panel focused on the ethics of generative AI and how educators can adapt the tools in classrooms. The panelists—IEEE Senior Member Carlotta Berry, IEEE Fellow Lydia E. Kavraki, and IEEE Life Fellow Rodney Brooks—also touched on which applications could benefit from robots in the future. The three have robotics backgrounds. They agreed that when an image or text was created using generative AI, that fact needs to be made clear, especially on social media platforms. One way to accomplish that, Berry said, is to implement policies that require documentation. Berry, a professor of electrical and computer engineering at the Rose-Hulman Institute of Technology, in Terre Haute, Ind., emphasized how gender and racial biases remain problems with AI. Because schools won’t be able to stop students from using tools such as ChatGPT, she said, educators need to teach them how to analyze data and how to tell whether a source is valid. Berry is the recipient of the IEEE Undergraduate Teaching Award. Brooks, an MIT robotics professor and cofounder of iRobot, said robots can help mitigate the effects of climate change and could help in caring for the elderly.
“We aren’t going to have enough people to look after them,” he said, “and it’s going to be a real problem fairly soon. We need to find a way to help the aging population maintain independence and dignity.” Brooks is the recipient of the IEEE Founders Medal. AI and robots can be used to monitor the health of the Earth, remove pollutants from water and soil, and better understand viruses such as the one that causes COVID-19, Kavraki said. The IEEE Fellow, a computer science professor at Rice University, in Houston, is the recipient of the IEEE Frances E. Allen Medal. Pioneers of the QR code, the cochlear implant, and the Internet The evening’s Honor Ceremony recognized those who developed technologies that changed people’s lives, including the QR code, cochlear implants, and the Internet. The IEEE Corporate Innovation Award went to Japanese automotive manufacturer Denso, located in Aichi, for “the innovation of QR (Quick Response) code and their widespread use across the globe.” The company’s CEO, Koji Arima, accepted the award. In his speech, the IEEE member said Denso is “committed to developing technology that makes people happy.” About 466 million people have hearing loss, according to the World Health Organization. To help those who are hearing impaired, in the 1970s husband and wife Erwin and Ingeborg Hochmair developed the multichannel cochlear implant. For their invention, the duo are the recipients of the IEEE Alexander Graham Bell Medal. “We hope to continue IEEE’s mission of developing technology for the benefit of humanity,” Ingeborg, an IEEE senior member, said in her acceptance speech. The ceremony ended with the presentation of the IEEE Medal of Honor to Cerf, who received a standing ovation. “Engineering changes the way the world works,” he said. He ended with a promise: “You ain’t seen nothing yet.” You can watch the IEEE Awards Ceremony on IEEE.tv.
- Satellite Signal Jamming Reaches New Lows by Lucas Laursen on 18 May 2023 at 17:13
Russia’s invasion of Ukraine in 2022 put Ukrainian communications in a literal jam: Just before the invasion, Russian hackers knocked out Viasat satellite ground receivers across Europe. Then entrepreneur Elon Musk swept in to offer access to Starlink, SpaceX’s growing network of low Earth orbit (LEO) communications satellites. Musk soon reported that Starlink was suffering jamming attacks and that SpaceX was responding with software countermeasures. In March, the U.S. Department of Defense (DOD) concluded that Russia was still trying to jam Starlink, according to documents leaked by U.S. National Guard airman Ryan Teixeira and seen by the Washington Post. Ukrainian troops have likewise blamed problems with Starlink on Russian jamming, the website Defense One reports. If Russia is jamming a LEO constellation, it would be a new layer in the silent war in space-ground communications. “There is really not a lot of information out there on this,” says Brian Weeden, the director of program planning for the Secure World Foundation, a nongovernmental organization that studies space governance. But, Weeden adds, “my sense is that it’s much harder to jam or interfere with Starlink [than with GPS satellites].” LEO Satellites Face New Security Risks Regardless of their altitude or size, communications satellites transmit more power and therefore require more power to jam than navigational satellites. However, compared with large geostationary satellites, LEO satellites—which orbit Earth at an altitude of 2,000 kilometers or lower—have frequent handovers that “introduce delays and opens up more surface for interference,” says Mark Manulis, a professor of privacy and applied cryptography at the University of the Federal Armed Forces’ Cyber Defense Research Institute (CODE) in Munich, Germany.
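Weeden’s point about transmit power can be made concrete with a free-space link budget. The sketch below is a back-of-the-envelope comparison using assumed, illustrative figures rather than values reported by any operator: a navigation signal crossing roughly 20,000 km arrives tens of decibels weaker at the ground than a comparable LEO downlink from a few hundred kilometers, so it takes far less jammer power to drown out.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def received_power_dbw(eirp_dbw, distance_m, freq_hz, rx_gain_dbi=0.0):
    """Power reaching the receiver, in dBW, over an ideal free-space link."""
    return eirp_dbw + rx_gain_dbi - fspl_db(distance_m, freq_hz)

# Illustrative, assumed numbers: a GPS-like signal from ~20,200 km
# versus a Ku-band LEO downlink from ~550 km.
gps_like = received_power_dbw(eirp_dbw=27.0, distance_m=20_200e3, freq_hz=1.575e9)
leo_like = received_power_dbw(eirp_dbw=35.0, distance_m=550e3, freq_hz=12e9)
```

With these assumptions the GPS-like signal arrives near −155 dBW, roughly 20 dB weaker than the LEO-like one; every 3 dB of that gap doubles the power a jammer needs to overwhelm the link.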
Security and communications researchers are working on defenses and countermeasures, mostly behind closed doors, but it is possible to infer from a few publications and open-source research how unprepared many LEO satellites are for direct attacks and some of the defenses that future LEO satellites may need. For years, both private companies and government agencies have been planning LEO constellations, each numbering thousands of satellites. The DOD, for example, has been designing its own LEO satellite network to supplement its more traditional geostationary constellations for more than a decade and has already begun issuing contracts for the constellation’s construction. University research groups are also launching tiny, standardized cube satellites (CubeSats) into LEO for research and demonstration purposes. This proliferation of satellite constellations coincides with the emergence of off-the-shelf components and software-defined radio—both of which make the satellites more affordable, but perhaps less secure. Russia’s defense agencies commissioned a system called Tobol that’s designed to counter jammers that might interfere with their own satellites, reported journalist and author Bart Hendrickx. That implies that Russia either can transmit jamming signals up to satellites, or suspects that adversaries can. Many of the agencies and organizations launching the latest generation of low-cost satellites haven’t addressed the biggest security issues they face, researchers wrote in one review of LEO security in 2022. That may be because one of the temptations of LEO is the ability of relatively cheap new hardware to do smaller jobs. “Satellites are becoming smaller. They are very purpose-specific,” says Ijaz Ahmad, a telecoms security researcher at the VTT Technical Research Centre in Espoo, Finland. 
“They have less resources for computing, processing, and also memory.” Less computing power means fewer encryption capabilities, as well as less ability to detect and respond to jamming or other active interference. The rise of software-defined radio (SDR) has also made it easier to get hardware to accomplish new things, including allowing small satellites to cover many frequency bands. “When you make it programmable, you provide that hardware with some sort of remote connectivity so you can program it. But if the security side is overlooked, it will have severe consequences,” Ahmad says. “At the moment there are no good standards focused on communications for LEO satellites.”—Mark Manulis, professor of privacy and applied cryptography, University of the Federal Armed Forces Among those consequences are organized criminal groups hacking and extorting satellite operators or selling information they have captured. One response to the risks of software-defined radio and the fact that modern low-cost satellites require firmware updates is to include some simple physical security. Starlink did not respond to requests for comments on its security, but multiple independent researchers said they doubt today’s commercial satellites match military-grade satellite security countermeasures, or even meet the same standards as terrestrial communications networks. Of course, physical security can be defeated with a physical attack, and state actors have satellites capable of changing their orbits and grappling with, and thus perhaps physically hacking, communications satellites, the Secure World Foundation stated in an April report. LEO Satellites Need More Focus on Cryptography, Hardware Despite that vulnerability, LEO satellites do bring certain advantages in a conflict: There are more of them, and they cost less per satellite. 
Attacking or destroying a satellite “might have been useful against an adversary who only has a few high-value satellites, but if the adversary has hundreds or thousands, then it’s a lot less of an impact,” Weeden says. LEO also offers a new option: sending a message to multiple satellites for later confirmation. That wasn’t possible when only a handful of GEO satellites covered Earth, but it is a way for cooperating transmitters and receivers to ensure that a message gets through intact. According to a 2021 talk by Vijitha Weerackody, a communications engineer at Johns Hopkins University, as few as three LEO satellites may be enough for such cooperation. Even with such cooperation, future LEO constellation designers may need to respond with improved antennas, radio strategies that include spread-spectrum modulation, and both temporal and transform-domain adaptive filtering. These strategies come at a cost to data transmission and complexity. But such measures may still be defeated by a strong enough signal that covers the satellite’s entire bandwidth and saturates its electronics. “There’s a need to introduce a strong cryptographic layer,” says Manulis. “At the moment there are no good standards focused on communications for LEO satellites. Governments should push for standards in that area relying on cryptography.” The U.S. National Institute of Standards and Technology does have draft guidelines for commercial satellite cybersecurity that satellite operator OneWeb took into account when designing its LEO constellation, says OneWeb principal cloud-security architect Wendy Ng: “Hats off to them, they do a lot of work speaking to different vendors and organizations to make sure they’re doing the right thing.” OneWeb uses encryption in its control channels, something a surprising number of satellite operators fail to do, says Johannes Willbold, a doctoral student at Ruhr University, in Bochum, Germany.
Willbold is presenting his analysis of three research satellites’ security on 22 May 2023 at the IEEE Symposium on Security and Privacy. “A lot of satellites had straight-up no security measures to protect access in the first place,” he says. Securing the growing constellations of LEO satellites matters to troops in trenches, investors in any space endeavor, anyone traveling into Earth orbit or beyond, and everyone on Earth who uses satellites to navigate or communicate. “I’m hoping there will be more initiatives where we can come together and share best practices and resources,” says OneWeb’s Ng. Willbold, who cofounded an academic workshop on satellite security, is optimistic that there will be: “It’s surprising to me how many people are now in the field, and how many papers they submitted.”
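The spread-spectrum modulation mentioned above as an anti-jam strategy buys a quantifiable “processing gain”: spreading each slow data bit across many fast chips lets the receiver average narrowband jamming back down in proportion to the spreading ratio. Here is a minimal sketch of the textbook arithmetic, checked against the widely published GPS C/A-code figures; the required SNR and implementation loss below are assumed, illustrative values, not any operator’s specification.

```python
import math

def processing_gain_db(chip_rate_hz, data_rate_hz):
    """Direct-sequence spread-spectrum processing gain in dB."""
    return 10 * math.log10(chip_rate_hz / data_rate_hz)

def jamming_margin_db(gain_db, required_snr_db, implementation_loss_db=2.0):
    """Jammer-to-signal ratio the receiver can tolerate and still decode."""
    return gain_db - required_snr_db - implementation_loss_db

# GPS C/A code: a 1.023 Mchip/s sequence spreading a 50 bit/s data stream.
gain = processing_gain_db(1.023e6, 50)                  # about 43 dB
margin = jamming_margin_db(gain, required_snr_db=10.0)  # about 31 dB
```

The roughly 43 dB of gain translates, after subtracting the SNR the demodulator needs and some loss, into a jamming margin of about 31 dB: the jammer can be over a thousand times stronger than the signal at the receiver before the link fails. Chip rate is thus one of the main levers future LEO designers can pull, at the cost of bandwidth.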
- This Engineer Promotes Innovation-Based Projects in Uganda by Joanna Goodrich on 17 May 2023 at 20:00
Ever since Lwanga Herbert was a youngster growing up in Kampala, Uganda, he wanted to create technology to improve his community. While attending a vocational school, he participated in a project that sought technological solutions for local electricians who were having problems troubleshooting systems. Herbert helped develop a detector to measure voltage levels in analog electronics; a pulse detector to identify digital pulses and signals; and a proximity alarm system. The tools he helped develop made troubleshooting easier and faster for the electricians. When he understood the impact his work had, he was inspired to pursue engineering as a career. “I saw firsthand that technology increases the speed, efficiency, and effectiveness of solving challenges communities face,” he says. The devices were recognized by the Uganda National Council for Science and Technology. The level and pulse detectors were registered as intellectual property through the African Regional Intellectual Property Organization. Herbert now works to use technology to address challenges faced by Uganda as a whole, such as high neonatal death rates. The IEEE member is the innovation director at the Log’el Science Foundation. The nonprofit, which was launched in 2001, works to foster technological development in Uganda. It strives to enable a more competitive job market by helping startups succeed and by sparking interest in science, technology, engineering, and math careers. Herbert has been active with IEEE’s humanitarian programs and is chair of the newly established IEEE Humanitarian Technology Board. HTB will oversee and support all humanitarian activities across IEEE and is responsible for fostering new collaborations. It also will fund related projects and activities. Because of his busy schedule, The Institute conducted this interview via email. We asked him about the goals of the Log’el Science Foundation, his humanitarian work, and how his IEEE membership has advanced his career. 
His answers have been edited for clarity. The Institute: What are you working on at the foundation? Lwanga Herbert: The foundation has four main projects: an incubation program; STEM education outreach; internship opportunities for both undergraduate and graduate students; and entrepreneurship development. The incubation program assists technology startups during their vulnerable inception stages, enabling them to grow and flourish. The objective is to encourage and promote innovation-based entrepreneurship by providing assistance and support such as mentorship, connecting participants to business and technical institutions, and facilitating courses on a range of technology and management topics. The STEM education program engages youth across the country by arranging for professional engineers to talk to students in primary school, high school, and college about their work. This greatly inspires and motivates them to embrace a career in STEM. The goal of connecting students to internships is to help them put the theoretical knowledge they learned at school into practice. The program helps prepare young learners for the workplace and provides them with career development opportunities. The entrepreneurship program’s goal is to instill the culture of entrepreneurship into the mindset of young people. [The program teaches business skills and holds competitions.] The Log’el Science Foundation hopes this leads to the creation of rich and creative business, scientific, technological, agricultural, and production operations in Uganda. What kind of impact have you seen from the programs? Herbert: They have enabled students to secure employment much faster than before and allowed their self-confidence to rise. Because they have more self-confidence, students have been able to start and operate successful business ventures. The outreach programs also enable young learners to develop and strengthen their interests in STEM-related career paths.
What challenges have you faced at your job, and how did you overcome them? Herbert: One of the key challenges is that the innovation process takes time to produce results, and therefore I need a lot of patience and sustained focus. I always remind myself to have hope, commitment, and passion when dealing with the process. Another challenge is making sure I stay inspired and motivated. Working in a non-inspiring and non-motivating society can bring down an innovator’s self-confidence and sense of direction. I have found that networking with a wide variety of people can help keep my morale up. Is there a humanitarian effort you’ve been a part of that stands out? Herbert: I led an IEEE Humanitarian Activities Committee-supported project in 2019 that aimed to reduce neonatal death rates and injuries among newborn babies in Uganda. There are considerable gaps in neonatal health care because of understaffing and a lack of functional medical equipment. Many neonatal deaths can be prevented with proper equipment. Both IEEE programs collaborated with Neopenda, a health tech startup founded in 2015 that designs and manufactures wearables. The device we developed monitored four major vital signs of a newborn: heart rate, respiration, blood oxygen saturation, and temperature. If any abnormalities in the vital signs are identified, they can be corrected promptly, thereby [helping to] prevent ill health or even death. When did you join IEEE and why? How has being a member benefited your career? Herbert: I joined in 2009 when I was a student at Kyambogo University in Kampala, because of its collaborative environment, global membership, and humanitarian efforts. As an IEEE member I have been able to improve my professional skills by learning how to be a team player, understand market needs, and view challenges as opportunities and develop solutions to those challenges.
It has also provided me with opportunities to contribute my knowledge to the technological community and learn how to work with people across the globe. Why is the formation of the HTB important for IEEE? Herbert: The elevation of what was previously the IEEE Humanitarian Activities Committee to the new HTB reflects the growing numbers of IEEE Special Interest Group on Humanitarian Technology (SIGHT) membership, project proposals, and funded teams. It also reflects the fact that 30 percent of all active IEEE members, and 60 percent of active IEEE student members, indicate an interest in the organization’s humanitarian programs when they join IEEE or renew their annual membership. It demonstrates the support of IEEE leaders, who have provided us with the structure to expand our role in supporting humanitarian technology activities across IEEE. Now we are poised to unite efforts, share best practices, and better capture the entire story of humanitarian technology at IEEE. We can use that to play a more coordinated role in the global humanitarian technology space with the ultimate goal of more effectively helping the world. What are your goals as the first chair of HTB? Herbert: Some of HTB’s goals this year include strengthening and expanding partnerships and collaborations with IEEE entities; enhancing support for humanitarian technologies and sustainable development activities; facilitating capacity building so IEEE members can access more educational resources and opportunities in the area of humanitarian technology and sustainable development; and creating awareness to increase the understanding of the role of engineering and technology in sustainable development. Earlier this year HTB held a call for proposals in collaboration with IEEE SIGHT for IEEE member grassroots projects that utilize technology to address pressing needs of the members’ local communities. For the first time, the areas of technical interest included sustainable development. 
The call for proposals also sought projects that use existing technologies to help solve challenges faced by people with disabilities or collaborate with local organizations that serve people with disabilities. Serving as the first chair of HTB with its expanded role and responsibilities sounds like a daunting task, as there is a lot to be done. The good news is that HTB is building upon a solid foundation and benefits from new board members who represent the Member and Geographic Activities Board, Technical Activities Board, Educational Activities Board, Standards Association Board, and IEEE Young Professionals. With this team, I feel strongly that we can accomplish HTB’s mission and yearly goals and continue to make a lasting impact.
- Just Calm Down About GPT-4 Already by Glenn Zorpette on 17 May 2023 at 15:00
Rapid and pivotal advances in technology have a way of unsettling people, because they can reverberate mercilessly, sometimes, through business, employment, and cultural spheres. And so it is with the current shock and awe over large language models, such as GPT-4 from OpenAI. It’s a textbook example of the mixture of amazement and, especially, anxiety that often accompanies a tech triumph. And we’ve been here many times, says Rodney Brooks. Best known as a robotics researcher, academic, and entrepreneur, Brooks is also an authority on AI: he directed the Computer Science and Artificial Intelligence Laboratory at MIT until 2007, and held faculty positions at Carnegie Mellon and Stanford before that. Brooks, who is now working on his third robotics startup, Robust.AI, has written hundreds of articles and half a dozen books and was featured in the motion picture Fast, Cheap & Out of Control. He is a rare technical leader who has had a stellar career in business and in academia and has still found time to engage with the popular culture through books, popular articles, TED Talks, and other venues. “It gives an answer with complete confidence, and I sort of believe it. And half the time, it’s completely wrong.”—Rodney Brooks, Robust.AI IEEE Spectrum caught up with Brooks at the recent Vision, Innovation, and Challenges Summit, where he was being honored with the 2023 IEEE Founders Medal. He spoke about this moment in AI, which he doesn’t regard with as much apprehension as some of his peers, and about his latest startup, which is working on robots for medium-size warehouses. Rodney Brooks on…
Will GPT-4 and other large language models lead to an artificial general intelligence in the foreseeable future?
Will companies marketing large language models ever justify the enormous valuations some of these companies are now enjoying?
When are we going to have full (level-5) self-driving cars?
What are the most attractive opportunities now in warehouse robotics?
You wrote a famous article in 2017, “The Seven Deadly Sins of AI Prediction.” You said then that you wanted an artificial general intelligence to exist—in fact, you said it had always been your personal motivation for working in robotics and AI. But you also said that AGI research wasn’t doing very well at that time at solving the basic problems that had remained intractable for 50 years. My impression now is that you do not think the emergence of GPT-4 and other large language models means that an AGI will be possible within a decade or so. Rodney Brooks: You’re exactly right. And by the way, GPT-3.5 guessed right—I asked it about me, and it said I was a skeptic about it. But that doesn’t make it an AGI. The large language models are a little surprising. I’ll give you that. And I think what they say, interestingly, is how much of our language is very much rote, R-O-T-E, rather than generated directly, because it can be collapsed down to this set of parameters. But in that “Seven Deadly Sins” article, I said that one of the deadly sins was how we humans mistake performance for competence. If I can just expand on that a little. When we see a person with some level of performance at some intellectual thing, like describing what’s in a picture, for instance, from that performance, we can generalize about their competence in the area they’re talking about. And we’re really good at that. Evolutionarily, it’s something that we ought to be able to do. We see a person do something, and we know what else they can do, and we can make a judgement quickly. But our models for generalizing from a performance to a competence don’t apply to AI systems. The example I used at the time was, I think it was a Google program labeling an image of people playing Frisbee in the park. And if a person says, “Oh, that’s a person playing Frisbee in the park,” you would assume you could ask him a question, like, “Can you eat a Frisbee?” And they would know, of course not; it’s made of plastic.
You’d just expect they’d have that competence. That they would know the answer to the question, “Can you play Frisbee in a snowstorm? Or, how far can a person throw a Frisbee? Can they throw it 10 miles? Can they only throw it 10 centimeters?” You’d expect all that competence from that one piece of performance: a person saying, “That’s a picture of people playing Frisbee in the park.” “What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be.”—Rodney Brooks, Robust.AI We don’t get that same level of competence from the performance of a large language model. When you poke it, you find that it doesn’t have the logical inference that it may have seemed to have in its first answer. I’ve been using large language models for the last few weeks to help me with the really arcane coding that I do, and they’re much better than a search engine. And no doubt, that’s because it’s 4,000 parameters or tokens. Or 60,000 tokens. So it’s a lot better than just a 10-word Google search. More context. So when I’m doing something very arcane, it gives me stuff. But what I keep having to do, and I keep making this mistake—it answers with such confidence any question I ask. It gives an answer with complete confidence, and I sort of believe it. And half the time, it’s completely wrong. And I spend 2 or 3 hours using that hint, and then I say, “That didn’t work,” and it just does this other thing. Now, that’s not the same as intelligence. It’s not the same as interacting. It’s looking it up. It sounds like you don’t think GPT-5 or GPT-6 is going to make a lot of progress on these issues. Brooks: No, because it doesn’t have any underlying model of the world. It doesn’t have any connection to the world. It is correlation between language. By the way, I recommend a long blog post by Stephen Wolfram. He’s also turned it into a book. I’ve read it. It’s superb. Brooks: It gives a really good technical understanding. 
What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be. Not long after ChatGPT and GPT-3.5 went viral last January, OpenAI was reportedly considering a tender offer that valued the company at almost $30 billion. Indeed, Microsoft invested an amount that has been reported as $10 billion. Do you think we’re ever going to see anything come out of this application that will justify these kinds of numbers? Brooks: Probably not. My understanding is that Microsoft’s initial investment was in cloud-computing time rather than cold, hard cash. OpenAI certainly needed [cloud computing time] to build these models because they’re enormously expensive in terms of the computing needed. I think what we’re going to see—and I’ve seen a bunch of papers recently about boxing in large language models—is much smoother language interfaces, input and output. But you have to box things in carefully so that the craziness doesn’t come out, and the making stuff up doesn’t come out. “I think they’re going to be better than the Watson Jeopardy! program, which IBM said, ‘It’s going to solve medicine.’ Didn’t at all. It was a total flop. I think it’s going to be better than that.”—Rodney Brooks, Robust.AI So you’ve got to box things in because it’s not a database. It just makes up stuff that sounds good. But if you box it in, you can get really much better language than we’ve had before. So when the smoke clears, do you think we’ll have major applications? I mean, putting aside the question of whether they justify the investments or the valuations, is it going to still make a mark? Brooks: I think it’s going to be another thing that’s useful. It’s going to be better language input and output. Because of the large numbers of tokens that get buffered up, you get much better context. But you have to box it so much…I am starting to see papers, how to put this other stuff on top of the language model.
And sometimes it’s traditional AI methods, which everyone had sort of forgotten about, but now they’re coming back as a way of boxing it in. I wrote a list of about 30 or 40 events like this over the last 50 years where it was going to be the next big thing. And many of them have turned out to be utter duds. They’re useful, like the chess-playing programs in the ’90s. That was supposed to be the end of humans playing chess. No, it wasn’t the end of humans playing chess. Chess is a different game now and that’s interesting. But just to articulate where I think the large language models come in: I think they’re going to be better than the Watson Jeopardy! program, which IBM said, “It’s going to solve medicine.” Didn’t at all. It was a total flop. I think it’s going to be better than that. But not AGI. “A very famous senior person said, ‘Radiologists will be out of business before long.’ And people stopped enrolling in radiology specialties, and now there’s a shortage of them.”—Rodney Brooks, Robust.AI So what about these predictions that entire classes of employment will go away, paralegals, and so on? Is that a legitimate concern? Brooks: You certainly hear these things. I was reviewing a government report a few weeks ago, and it said, “Lawyers are going to disappear in 10 years.” So I tracked it down and it was one barrister in England, who knew nothing about AI. He said, “Surely, if it’s this good, it’s going to get so much better that we’ll be out of jobs in 10 years.” There’s a lot of disaster hype. Someone suggests something and it gets amplified. We saw that with radiologists. A very famous senior person said, “Radiologists will be out of business before long.” And people stopped enrolling in radiology specialties and now there’s a shortage of them. Same with truck driving….
There are so many ads from all these companies recruiting truck drivers because there’s not enough truck drivers, because three or four years ago, people were saying, “Truck driving is going to go away.” In fact, six or seven years ago, there were predictions that we would have fully self-driving cars by now. Brooks: Lots of predictions. CEOs of major auto companies were all saying by 2020 or 2021 or 2022, roughly. Full self-driving, or level 5, still seems really far away. Or am I missing something? Brooks: No. It is far away. I think the level-2 and level-3 stuff in cars is amazingly good now. If you get a brand-new car and pay good money for it, it’s pretty amazingly good. The level 5, or even level 4, not so much. I live in the city of San Francisco, and for almost a year now, I’ve been able to take rides after 10:30 p.m. and before 5:00 a.m., if it’s not a foggy day—I’ve been able to take rides in a Cruise vehicle with no driver. Just in the last few weeks, Cruise and Waymo got an agreement with the city where every day, I now see cars, as I’m driving during the day, with no driver in them. GM supposedly lost $561 million on Cruise in just the first three months of this year. Brooks: That’s how much it cost them to run that effort. Yeah. It’s a long way from breakeven. A long, long way from breakeven. So I mean, I guess the question is, can even a company like GM get from here to there, where it’s throwing off huge profits? Brooks: I wonder about that. We’ve seen a lot of the efforts shut down. It sort of didn’t make sense that there were so many different companies all trying to do it. Maybe, now that we’re merged down to one or two efforts and out of that, we’ll gradually get there. But here’s another case where the hype, I think, has slowed us down. In the ’90s, there was a lot of research, especially at Berkeley, about what sensors you could embed in freeways which would help cars drive without a driver paying attention.
So putting sensors, changing the infrastructure, and changing the cars so they used that new infrastructure, you would get attentionless driving. “One of the standard processes has four teraflops—four million million floating point operations a second on a piece of silicon that costs 5 bucks. It’s just mind-blowing, the amount of computation.”—Rodney Brooks, Robust.AI But then the hype came: “Oh no, we don’t even need that. It’s just going to be a few years and the cars will drive themselves. You don’t need to change infrastructure.” So we stopped changing infrastructure. And I think that slowed the whole autonomous vehicles for commuting down by at least 10, maybe 20 years. There’s a few companies starting to do it now again. It takes a long time to make these things real. I don’t really enjoy driving, so when I see these pictures from popular magazines in the 1950s of people sitting in bubble-dome cars, facing each other, four people enjoying themselves playing cards on the highway, count me in. Brooks: Absolutely. And as a species, humanity, we have changed up our mobility infrastructure multiple times. In the early 1800s, it was steam trains. We had to do enormous changes to our infrastructure. We had to put flat rails right across countries. When we started adopting automobiles around the turn from the 19th to the 20th century, we changed the roads. We changed the laws. People could no longer walk in the middle of the road like they used to. We changed the infrastructure. When you go from trains that are driven by a person to self-driving trains, such as we see in airports and a few out there, there’s a whole change in infrastructure so that you can’t possibly have a person walking on the tracks. We’ve tried to make this transition [to self-driving cars] without changing infrastructure. You always need to change infrastructure if you’re going to do a major change.
You recently wrote that there will be no viable robotics applications that will harness the serious power of GPTs in any meaningful way. But given that, is there some other avenue of AI development now that will prove more beneficial for robotics, or more transformative? Or alternatively, will AI and robotics kind of diverge for a while, while enormous resources are put on large language models? Brooks: Well, let me give a very positive spin. There has been a transformation. It’s just taking a little longer to get there. Convolutional neural networks being able to label regions of an image. It’s not perceiving in the same way a person perceives, but we can label what’s there. Along with the end of Moore’s Law and Dennard scaling—this is allowing silicon designers to get outside of the idea of just a faster PC. And so now, we’re seeing very cheap pieces of very effective silicon that you put right with a camera. Instead of getting an image out, you now get labels out, labels of what’s there. And it’s pretty damn good. And it’s really cheap. So one of the standard processes has four teraflops—four million million floating point operations a second on a piece of silicon that costs 5 bucks. It’s just mind-blowing, the amount of computation. It would be narrow floating point, 16-bit floating point, being applied to this labeling. We’re not seeing that yet in many deployed robots, but a lot of people are using that, and building, experimenting, getting towards product. So there’s a case where AI, convolutional neural networks—which, by the way, applied to vision is 10 years old—is going to make a difference. “Amazon really made life difficult for other suppliers by doing [robotics in the warehouse]. But 80 percent of warehouses in the U.S. have zero automation; only 5 percent are heavily automated.”—Rodney Brooks, Robust.AI And here’s one of my other “Seven Deadly Sins of AI Prediction.” It was how fast people think new stuff is going to be deployed.
It takes a while to deploy it, especially when hardware is involved, because that’s just lots of stuff that all has to be balanced out. It takes time. Like the self-driving cars. So of the major categories of robotics—warehouse robots, collaborative robots, manufacturing robots, autonomous vehicles—which are the most exciting right now to you or which of these subdisciplines has experienced the most rapid and interesting growth? Brooks: Well, I’m personally working in warehouse robots for logistics. And your last company did collaborative robots. Brooks: Did collaborative robots in factories. That company was a beautiful artistic success, a financial failure, but— This is Rethink Robotics. Brooks: Rethink Robotics, but going on from that now, we were too early, and I made some dumb errors. I take responsibility for that. Some dumb errors in how we approached the market. But that whole thing is now—that’s going along. It’s going to take another 10 or 15 years. Collaborative robots will. Brooks: Collaborative robots, but that’s what people expect now. Robots don’t need to be in cages anymore. They can be out with humans. In warehouses, we’ve had more and more. You buy stuff at home, expect it to be delivered to your home. COVID accelerated that. People expect it the same day now in some places. Brooks: Well, Amazon really made life difficult for other suppliers by doing it. But 80 percent of warehouses in the U.S. have zero automation; only 5 percent are heavily automated. And those are probably the largest. Brooks: Yeah, they’re the big ones. Amazon has enormous numbers of those, for instance. But there’s a large number of warehouses which don’t have automation. So these are medium-size warehouses? Brooks: Yeah. 100,000 square feet, something of that sort, whereas the Amazon ones tend to be over a million square feet and you completely rebuild it [around the automation]. But these 80 percent are not going to get rebuilt. 
They have to adopt automation into an existing workflow and modify it over time. And there are a few companies that have been successful, and I think there’s a lot of room for other companies and other workflows. “[Warehouse workers] are not subject to the whims of the automation. They get to take over. When the robot’s clearly doing something dumb, they can just grab it and move it, and it repairs.”—Rodney Brooks, Robust.AI So for your current company, Robust.AI, this is your target. Brooks: That’s what we’re doing. Yeah. So what is your vision? So you have a program—you have a software suite called Grace and you also have a hardware platform called Carter. Brooks: Exactly. And let me say a few words about it. We start with the assumption that there are going to be people in the warehouses that we’re in. There’s going to be people for a long time. It’s not going to be lights-out, full automation because those 80 percent of warehouses are not going to rebuild the whole thing and put millions of dollars of equipment in. They’ll be gradually putting stuff in. So we’re trying to make our robots human-centered, we call it. They’re aware of people. They’re using convolutional neural networks to see that that’s a person, to see which way they’re facing, to see where their legs are, where their arms are. You can track that in real time, 30 frames a second, right at the camera. And knowing where people are, who they are. They are people, not obstacles, so treating them with respect. But then the magic of our robot is that it looks like a shopping cart. It’s got handlebars on it. If a person goes up and grabs it, it’s now a powered shopping cart or powered cart that they can move around. So [the warehouse workers] are not subject to the whims of the automation. They get to take over. When the robot’s clearly doing something dumb, they can just grab it and move it, and it repairs.
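Brooks's description suggests a simple control-mode arbiter: the cart drives itself until a person grabs the handlebars, at which point the person's input wins. The sketch below is entirely hypothetical — Robust.AI has not published its control code — and the class name, sensor interface, and force threshold are all invented for illustration.

```python
# Hypothetical sketch of the handlebar-override behavior Brooks describes:
# autonomous by default, immediately yielding to a person who grabs the
# handlebars. All names and the force threshold are invented.
class CartController:
    GRIP_FORCE_THRESHOLD = 5.0  # newtons; made-up value for illustration

    def __init__(self):
        self.mode = "autonomous"

    def update(self, handlebar_force: float) -> str:
        """Pick the control mode from the latest handlebar sensor reading."""
        if handlebar_force >= self.GRIP_FORCE_THRESHOLD:
            self.mode = "manual"      # a person has taken over
        else:
            self.mode = "autonomous"  # resume the planned route
        return self.mode

cart = CartController()
print(cart.update(0.0))   # no grip: stays autonomous
print(cart.update(12.0))  # firm grip: becomes a powered cart
```

The point of the design, as Brooks frames it, is that the override needs no interface beyond grabbing the cart: the worker's hands are the mode switch.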
You are unusual for a technologist because you think broadly and widely, and you’re not afraid to have an opinion on things going on in the technical conversation. I mean, we’re living in really interesting times in this weird postpandemic world where lots of things seem to be at some sort of inflection. Are there any of these big projects now that fill you with hope and optimism? What are some big technological initiatives that give you hope or enthusiasm? Brooks: Well, here’s one that I haven’t written about, but I’ve been aware of and following. Climate change makes farming more difficult, more uncertain. So there’s a lot of work on indoor farming, changing how we do farming from the way we’ve done it for the 10,000 years we, as a species, have been farming that we know about, to technology indoors, and combining it with genetic engineering of microbes, combining it with a lot of computation, machine learning, getting the control loops right. There’s some fantastic things at small scale right now, producing interesting, good food in ways that are so much cleaner, use so much less water, and give me hope that we will be able to have a viable food supply. Not just horrible gunk to eat, but actually stuff that we like, with a way smaller impact on our planet than farm animals have and the widespread use of fertilizer, polluting the water supplies. I think we can get to a good, clean system of providing food for billions of people. I’m really hopeful about that. I think there’s a lot of exciting things happening there. It’s going to take 10, 20, 30 years before it becomes commonplace, but already, in my local grocery store in San Francisco, I can buy lettuce that’s grown indoors. So we’re seeing leafy greens getting out to the mainstream already. There’s a whole lot more coming.
- Budget Drones in Ukraine Are Redefining Warfare, by Philip E. Ross on 17 May 2023 at 14:13
The war between Russia and Ukraine is making a lot of high-tech military systems look like so many gold-plated irrelevancies. That’s why both sides are relying increasingly on low-tech alternatives—dumb artillery shells instead of pricey missiles, and drones instead of fighter aircraft. “This war is a war of drones, they are the super weapon here,” Anton Gerashchenko, an adviser to Ukraine’s minister of internal affairs, told Newsweek earlier this year. In early May, Russia attributed explosions at the Kremlin to drones sent by Ukraine for the purpose of assassinating Vladimir Putin, the Russian leader. Ukraine denied the allegation. True, the mission to Moscow was ineffectual, but it is amazing that it could be managed at all. Like fighter planes, military drones started cheap, then got expensive. Unlike the fighters, though, they got cheap again. Drones fly slower than an F-35, carry a smaller payload, beckon ground fire, and last mere days before being shot out of the skies. But for the most part, the price is right: China’s DJI Mavic 3, used by both Russia and Ukraine for surveillance and for delivering bombs, goes for around US $2,000. You can get 55,000 of them for the price of a single F-35. Also, they’re much easier to maintain: When they break, you throw them out, and there’s no pilot to be paraded through the streets of the enemy capital. [Photo caption: Smoke clouds rise on a flat-screen monitor above a struck target, as a Ukrainian serviceman of the Adam tactical group operates a drone to spot Russian positions near the city of Bakhmut, Donetsk region, on 16 April 2023, amid the Russian invasion of Ukraine. Sergey Shestak/AFP/Getty Images] You can do a lot with 55,000 drones. Shovel them at the foe and one in five may make it through. Yoke them together and send them flocking like a murmuration of starlings, and they will overwhelm antiaircraft defenses. Even individually they can be formidable.
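The cost asymmetry above is easy to check with back-of-envelope arithmetic. The figures are the article's own ($2,000 per Mavic 3, $110 million starting price for an F-35, "one in five may make it through"); the script is only a sanity check of those numbers, not an analysis.

```python
# Back-of-envelope check of the drone-vs-fighter cost asymmetry,
# using only the figures quoted in the article.
MAVIC_3_COST = 2_000          # US dollars, approximate street price
F35_COST = 110_000_000        # quoted starting price for an F-35
SURVIVAL_RATE = 1 / 5         # "one in five may make it through"

drones_per_f35 = F35_COST // MAVIC_3_COST
print(drones_per_f35)  # 55,000 drones for the price of one fighter

# Even with heavy attrition, the surviving fraction is still a large force.
survivors = int(drones_per_f35 * SURVIVAL_RATE)
print(survivors)       # 11,000 expected to get through
```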
One effective tactic is to have a drone “loiter” near a point where targets are expected to emerge, then dash in and drop a small bomb. Videos posted on social media purport to show Ukrainian remote operators dropping grenades on Russian troops or through the hatches of Russian armored vehicles. A drone gives a lot of bang for the buck, as utterly new weapons often do. Over time, as a weapons system provokes countermeasures, its designers respond with improvements, and the gold-plating accumulates. In 1938, a single British Spitfire cost £9,500 to produce, equivalent to about $1 million today. In the early 1950s the United States F-86 Sabre averaged about $250,000 apiece, about $3 million now. The F-35, today’s top-of-the-line U.S. fighter, starts at $110 million. Behold the modern-day fighter plane: the hypertrophied product of the longest arms race since the days of the dreadnought. “In the year 2054, the entire defense budget will purchase just one aircraft,” wrote Norman Augustine, formerly Under Secretary of the Army, back in 1984. “This aircraft will have to be shared by the Air Force and Navy 3 1/2 days each per week except for leap year, when it will be made available to the Marines for the extra day.” “Sophisticated tech is more readily available, and with AI advances and the potential for swarms, there’s even more emphasis on quantity.”—Kelly A. Grieco, Stimson Center Back in 1981, Israel sent modest contraptions sporting surveillance cameras in its war against Syria, to some effect. The U.S. military took hold of the concept, and in its hands, those simple drones morphed into Predators and Reapers, bomber-size machines that flew missions in Iraq and Afghanistan. Each cost millions of dollars (if not tens of millions). But a technologically powerful country needn’t count the cost; the United States certainly didn’t.
“We are a country of technologists, we love technological solutions,” says Kelly A. Grieco, a strategic analyst at the Stimson Center, a think tank in Washington, D.C. “It starts with the Cold War: Looking at the Soviet Union, their advantages were in numbers and in their close approach to Germany, the famous Fulda Gap. So we wanted technology to offset the Soviet numerical advantage.” A lot of the cost in an F-35 can be traced to the stealth technology that lets it elude even very sophisticated radar. The dreadnoughts of old needed guns of ever-greater range—enough finally to shoot beyond the horizon—so that the other side couldn’t hold them at arm’s length and pepper them with shells the size of compact cars. Arms races tend to shift when a long peacetime buildup finally ends, as it has in Ukraine. “The character of war has moved back toward quantity mattering,” Grieco says. “Sophisticated tech is more readily available, and with AI advances and the potential for swarms, there’s even more emphasis on quantity.” A recent research paper she wrote with U.S. Air Force Col. Maximilian K. Bremer notes that China has showcased such capabilities, “including a swarm test of 48 loitering munitions loaded with high-explosive warheads and launched from a truck and helicopter.” What makes these things readily available—as the nuclear and stealth technologies were not—is the Fourth Industrial Revolution: 3D printing, easy wireless connections, AI, and the big data that AI consumes. These things are all out there, on the open market. “You can’t gain the same advantage from simply possessing the technology,” Grieco says. “What will become more important will be how you use it.” One example of how experience has changed use comes from the early days of the war in Ukraine. That country scored early successes with the Baykar Bayraktar TB2, a Turkish drone priced at an estimated $5 million each, about one-sixth as much as the United States’ Reaper, which it broadly resembles.
That’s not cheap, except by U.S. standards. “The Bayraktar was extremely effective at first, but after Russia got its act together with air defense, they were not as effective by so large a margin,” says Zach Kallenborn, a military consultant associated with the Center for Strategic and International Studies, a think tank in Washington, D.C. That, he says, led both sides to move to masses of cheaper drones that get shot down so often they have a working life of maybe three to four days. So what? It’s a good cost-benefit ratio for drones as cheap as Ukraine’s DJIs and for Russia’s new equivalent, the Shahed-136, supplied by Iran. Ukraine has also resorted to homemade drones as an alternative to long-range jet fighters and missiles, which Western donors have so far refused to provide. It recently launched such drones from its own territory to targets hundreds of kilometers inside Russia; Ukrainian officials said that they were working on a model that would fly about 1,000 kilometers. Every military power is now staring at these numbers, not least the United States and China. If those two powers ever clash, it would likely be over Taiwan, which China says it will one day absorb and the United States says it will defend. Such a far-flung maritime arena would be very different from the close-in land war going on now in Eastern Europe. The current war may therefore not be a good guide to future ones. “I don’t buy that drones will transform all of warfare. But even if they do, you’d need to get them all the way to Taiwan. And to do that you’d need [aircraft] carriers,” says Kallenborn. “And you’d need a way to communicate with drones. Relays are possible, but now satellites are key, so China’s first move might be to knock out satellites.
There’s reason to doubt they would, though, because they need satellites, too.” In every arms race there is always another step to take. Right now the militaries of the world are working on ways to shoot down small drones with directed-energy weapons based on lasers or microwaves. The marginal cost of a shot would be low—once you’ve amortized the expense of developing, making, and deploying such weapons systems. Should such antidrone measures succeed, then succeeding generations of drones will be hardened against them. With gold plating.
- Andrew Ng: Unbiggen AI, by Eliza Strickland on 9 February 2022 at 15:31
Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A. Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias. Andrew Ng on...
- What’s next for really big models
- The career advice he didn’t listen to
- Defining the data-centric AI movement
- Synthetic data
- Why Landing AI asks its customers to do the work

The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way? Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it.
Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions. When you say you want a foundation model for computer vision, what do you mean by that? Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them. What needs to happen for someone to build a foundation model for video? Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision. Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries. It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users. Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step.
One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation. “In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.” —Andrew Ng, CEO & Founder, Landing AI I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince. I expect they’re both convinced now. Ng: I think so, yes. Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.” How do you define data-centric AI, and why do you consider it a movement? Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem.
So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data. When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline. The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up. You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them? Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn. When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set? Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. 
What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system. For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.

Could this focus on high-quality data help with bias in data sets, if you’re able to curate the data more before training?

Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle. One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data.
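The inconsistency-flagging tooling Ng describes can be illustrated with a minimal sketch. The item IDs and labels below are hypothetical; the point is simply to surface the items whose annotators disagree, so those can be relabeled first instead of collecting more data.

```python
from collections import defaultdict

def flag_inconsistent(annotations):
    """Group annotations by item and flag items whose labels disagree.

    annotations: iterable of (item_id, label) pairs, possibly from
    several annotators labeling the same item.
    Returns a dict mapping each inconsistent item_id to its label set.
    """
    labels_by_item = defaultdict(set)
    for item_id, label in annotations:
        labels_by_item[item_id].add(label)
    return {item: labels for item, labels in labels_by_item.items()
            if len(labels) > 1}

# Hypothetical annotations from two annotators.
annotations = [
    ("img_001", "scratch"), ("img_001", "scratch"),
    ("img_002", "dent"),    ("img_002", "scratch"),  # disagreement
    ("img_003", "dent"),
]
inconsistent = flag_inconsistent(annotations)
print(sorted(inconsistent))  # → ['img_002']
```

Only the flagged subset then needs a relabeling pass, which is what makes the approach cheaper than gathering more data across the board.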
Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

When you talk about engineering the data, what do you mean exactly?

Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity. For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.

What about using synthetic data? Is that often a good solution?

Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.
Do you mean that synthetic data would allow you to try the model on more data sets?

Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, or other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category. Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.

To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data. One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use.
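The targeted step Ng describes for the pit-mark class can be sketched minimally. The file names and labels below are invented, and simple duplication stands in for real augmentation or synthetic generation; the point is that only the class flagged by error analysis gets extra examples.

```python
import random
from collections import Counter

def oversample_class(dataset, target_class, factor, seed=0):
    """Return the dataset with each example of one weak class repeated
    `factor` times in total -- a stand-in for targeted augmentation or
    synthetic generation aimed only at that class."""
    extra = [ex for ex in dataset if ex[1] == target_class]
    augmented = list(dataset) + extra * (factor - 1)
    random.Random(seed).shuffle(augmented)
    return augmented

# Hypothetical labeled inspection images.
data = [("casing_01.png", "scratch"), ("casing_02.png", "dent"),
        ("casing_03.png", "pit_mark"), ("casing_04.png", "scratch")]
balanced = oversample_class(data, "pit_mark", factor=3)
counts = Counter(label for _, label in balanced)
print(counts["pit_mark"])  # → 3
```

A real pipeline would generate genuinely new pit-mark images rather than copies, but the bookkeeping around the targeted class is the same.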
Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations. In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic.
The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.

This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
- How AI Will Change Chip Design, by Rina Diane Caballar on 8 February 2022 at 14:00
The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process. Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU v4 AI chip has doubled its processing power compared with that of its previous version. But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

How is AI currently being used to design the next generation of chips?

Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There are a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider. Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases.
We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

What are the benefits of using AI for chip design?

Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced-order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, and your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.

So it’s like having a digital twin in a sense?

Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.

So, it’s going to be more efficient and, as you said, cheaper?

Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, and experiment as much as possible without making something using the actual process engineering.

We’ve talked about the benefits. How about the drawbacks?

Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps.
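The surrogate workflow Gorr outlines can be sketched in a few lines. The `expensive_model` below is a made-up stand-in for a slow physics-based simulation, and the surrogate is just piecewise-linear interpolation of a coarse sample grid; real surrogates are usually fitted regression or neural models, but the shape of the loop is the same: sample the expensive model sparsely once, then run the dense parameter sweep against the cheap surrogate.

```python
import bisect
import math

def expensive_model(x):
    """Stand-in for a computationally expensive physics-based simulation."""
    return math.exp(-x) * math.sin(3 * x)

# Sample the expensive model once, on a coarse grid...
grid = [i / 10 for i in range(0, 21)]          # 0.0 .. 2.0
samples = [expensive_model(x) for x in grid]

def surrogate(x):
    """Cheap piecewise-linear surrogate built from the coarse samples."""
    i = min(max(bisect.bisect_right(grid, x) - 1, 0), len(grid) - 2)
    t = (x - grid[i]) / (grid[i + 1] - grid[i])
    return samples[i] + t * (samples[i + 1] - samples[i])

# ...then run the dense parameter sweep against the surrogate instead
# of calling the expensive model 2,001 times.
sweep = [k / 1000 for k in range(0, 2001)]
best_x = max(sweep, key=surrogate)
print(round(best_x, 2))  # → 0.4
```

Here the sweep touches the expensive model only 21 times instead of 2,001, which is the efficiency gain Gorr describes, at the cost of the surrogate's interpolation error.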
But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years. Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together. One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

How can engineers use AI to better prepare and extract insights from hardware or sensor data?

Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start. One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.

What should engineers and designers consider when using AI for chip design?
Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

How do you think AI will affect chip designers’ jobs?

Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

How do you envision the future of AI and chip design?

Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
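Earlier in the conversation, Gorr mentioned exploring the frequency domain of high-frequency sensor data. A minimal sketch of that idea (all signal parameters here are invented): a naive discrete Fourier transform is enough to pull a 50 Hz vibration line out of a short record that also contains a slow drift.

```python
import cmath
import math

def dominant_frequency(signal, sample_rate):
    """Naive DFT: return the frequency (Hz) of the bin with the largest
    magnitude, ignoring the DC term. Fine for a quick look at short
    records; use an FFT library for anything longer."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * sample_rate / n

# Hypothetical record: a 50 Hz vibration sampled at 1 kHz,
# with a slow offset drift mixed in.
rate = 1000
sig = [math.sin(2 * math.pi * 50 * t / rate) + 0.1 * t / rate
       for t in range(200)]
print(dominant_frequency(sig, rate))  # → 50.0
```

The drift dominates the time-domain plot, but in the frequency domain the vibration line stands out immediately, which is the kind of insight Gorr is pointing at.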
- Atomically Thin Materials Significantly Shrink Qubits, by Dexter Johnson on 7 February 2022 at 16:12
Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality. IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability. Now researchers at MIT have managed both to reduce the size of the qubits and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient; they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit. Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates.
The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. (Photo: Nathan Fiske/MIT)

In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another. As a result, the intrinsic silicon substrate below the plates and, to a smaller degree, the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance. In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates. “We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said co-lead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory of Electronics. On either side of the hBN, the MIT researchers used the 2D superconducting material niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang.
This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas. While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor. “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.” This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide no longer plays a significant role. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits. “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang. Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
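To get a rough feel for why a thin, low-loss dielectric shrinks the footprint, consider a simple parallel-plate estimate. All numbers below are illustrative assumptions (a ~70 fF shunt capacitance, an hBN relative permittivity around 3.5, and a ~30 nm dielectric thickness), not the MIT device's actual parameters:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_side(capacitance, eps_r, thickness):
    """Side length of a square parallel-plate capacitor:
    C = eps0 * eps_r * A / d  =>  A = C * d / (eps0 * eps_r)."""
    area = capacitance * thickness / (EPS0 * eps_r)
    return area ** 0.5

# Assumed, illustrative values -- not taken from the paper.
side = plate_side(capacitance=70e-15, eps_r=3.5, thickness=30e-9)
print(f"{side * 1e6:.1f} um per side")  # → 8.2 um per side
```

Even with these rough numbers, the plate comes out roughly an order of magnitude smaller on a side than the ~100-micrometer plates of the coplanar design, which is the qualitative point of the hBN sandwich.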