IEEE News

IEEE Spectrum

  • This Idea Wasn't All Wet: The Sensing Water-Saving Shower Head Debuts
    by Tekla S. Perry on 2 October 2022 at 16:00

    For Evan Schneider, the family dinner table is a good place for invention. “I’m always, ‘Wouldn’t it be cool if this or that,’” he says, “and people would humor me.” In 2012, with California in the midst of a severe drought, Schneider, then a mechanical engineering graduate student at Stanford University, once again tossed out a “cool idea.” He imagined a shower head that would sense when the person showering moved out from under the stream of water. The shower head would then automatically turn the water off, turning it back on again when the person moved back into range. With such a device, he thought, people could enjoy a long shower without wasting water. “But turning the water on and off manually didn’t make sense in our house,” Schneider said. “We had separate knobs for hot and cold, and another one to switch from tub to shower, so you’d have to adjust the water every time you turned it back on. You’d waste more water than you saved. Plus a shower is a blissful time of relaxation, you don’t want to stop the party halfway.” Ten years and many starts and stops later, that sensing showerhead is now shipping to customers from Oasense, a company incorporated in 2019. “The general idea is really simple,” Schneider says. “A lot of people have said they also thought of this idea. And I’m sure that’s true, but there were a lot of devils in the details.” Oasense’s team has been granted several patents related to their device, the first filed by Schneider in 2016. Schneider’s development path started soon after that dinner-table conversation. First, he confirmed that showers were a big part of water usage for a typical household, and that no such device was already on the market. He collected off-the-shelf components, including an infrared sensor scavenged from a high-end automatic faucet, designed a prototype in a CAD system, printed out the plastic parts using a 3-D printer, and assembled it. With 4 AA batteries as a power source, the gadget would operate for about a year, thanks to his choice of a latching solenoid valve, one that uses power to switch from open to closed but doesn’t draw any power to hold in one state or another. The prototype worked well enough that his parents were willing to throw out their standard showerhead. He assembled dozens of them and distributed them to friends and family—anyone willing to try. Oasense co-founder Ted Li assembles an early version of the company’s sensing shower head.Oasense In 2016, Schneider decided to run a Kickstarter campaign to see if the gadget could attract broad interest. The Kickstarter ultimately failed; it drew a decent number of potential buyers, but, says Schneider, “I had set the bar high, because I was busy doing other things, and if I switched to this, I wanted to make sure it would have a good chance of working out. It didn’t meet that bar; it raised about $34,000 out of its $75,000 goal.” So Schneider put his shower head idea on hold. Instead, he focused on expanding a burgeoning small business that he was also passionate about—3-D printing prototypes and various parts for hardware companies. But the shower head wasn’t done with him. In 2017 someone who Schneider had never met edited the video from the Kickstarter pitch and shared it on Facebook. This time, the video got far more attention—millions of views in just weeks. Unfortunately, the timing couldn’t have been worse. Schneider was dealing with a flare-up of a chronic illness and his 3-D printing business was at a critical growth period. 
“I had wanted this for years, but it was the worst time for it to happen,” he says. “I still believed in the product,” Schneider continued, “but I knew it needed improvements and more attention than I was able to give it. I tried for a couple of weeks to reply to all these people contacting me, thousands of them, but it was too much. I was planning to shelve it.” That’s when Chih-Wei Tang, a friend from Stanford’s mechatronics program who had been an early backer of the project on Kickstarter, reached out to Schneider. Tang, who was working as a technical product manager at the Ford Greenfield Labs, convinced Schneider that he could form a team capable of commercializing the product. Tang pulled in his friend Ted Li, who had just left Apple after managing display technology for the iPhone and Apple Watch. Tang and Li devoted themselves to the project full-time, Schneider helped part-time as needed. The three started by trying to better adapt an off-the-shelf sensor, but ended up designing a sensor suite with custom hardware and algorithms. They incorporated as Oasense in December 2019 as co-founders. In late 2020, the company went out for funding, and brought in about $1 million from angel investors, friends, and family. In addition to the founders, Oasense now has four full-time and three part-time employees. Oasense co-founders [from left] Ted Li, Evan Schneider, and Chih-Wei Tang.Oasense The current version of the device includes several sensors (across a wide range of light wavelengths) and software that allow the sensors to self-calibrate since every shower environment is different in terms of light, reflectivity, size, and design. Calibration happens during warm-up, when the person showering is unlikely to be standing in the stream. A temperature sensor determines when this warm-up period is over and cuts the flow if the user hasn’t moved under the shower head. The redesign also replaced the AA batteries with a turbine that generates power from the water flow and sends it to a small rechargeable battery sealed inside the device. Says Tang, “It does seem like someone would have built this before, but it turns out to be really complicated. For example, one problem that affects the noise in the sensor signals is fog. In a hot shower, after 3 minutes, our original sensor was blinded by fog. When we designed our new sensors, we had to make sure that didn’t happen. “And these sensors are power hungry and need to be on for the duration of the shower, whether water is flowing or not, so generator and sensor efficiency had to be maximized.” Oasense officially launched its product, Reva, in August. The company is working to figure out the best way to sell the gadget; it is now just doing direct sales at $350 per self-installable unit. “Two trends are coming together,” Tang says. “Sustainability is what everyone has to be about these days, and technology is invading every corner of our homes. Using technology, we designed sustainability into a product that doesn’t compromise quality or the experience, it just addresses the problem.”
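    The control logic described above (self-calibration during warm-up, a temperature sensor to detect when warm-up ends, and a latching solenoid valve toggled by presence detection) can be summarized in a short sketch. The Python below is purely illustrative, not Oasense’s firmware; the presence_sensor, temp_sensor, and valve interfaces it assumes are hypothetical.

    ```python
    # A minimal sketch (not Oasense's firmware) of the control logic described above:
    # calibrate while the water warms up, detect the end of warm-up with a temperature
    # sensor, then pulse a latching solenoid valve based on presence detection.
    # The presence_sensor, temp_sensor, and valve interfaces are hypothetical.
    import time

    WARM_WATER_C = 35.0      # assumed threshold marking the end of warm-up
    POLL_INTERVAL_S = 0.2

    class ShowerController:
        def __init__(self, presence_sensor, temp_sensor, valve):
            self.presence = presence_sensor   # reports whether someone is under the stream
            self.temp = temp_sensor           # water temperature, in degrees C
            self.valve = valve                # latching valve: draws power only when switching
            self.baseline = None

        def run(self):
            self.valve.open()
            # Warm-up: the user is unlikely to be under the cold stream, so use this
            # period to calibrate the sensors to the particular shower enclosure.
            while self.temp.read_c() < WARM_WATER_C:
                self.baseline = self.presence.calibrate()
                time.sleep(POLL_INTERVAL_S)
            # Warm-up is over: shut the water off unless the user is actually in it.
            while True:
                present = self.presence.detect(self.baseline)
                if present and not self.valve.is_open():
                    self.valve.open()    # one switching pulse, then no holding current
                elif not present and self.valve.is_open():
                    self.valve.close()
                time.sleep(POLL_INTERVAL_S)
    ```

    The latching valve is the detail that made the early battery-powered prototype practical: because it draws current only while switching states, four AA batteries could run the gadget for about a year.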

  • For Better or Worse, Tesla Bot Is Exactly What We Expected
    by Evan Ackerman on 1 October 2022 at 18:50

    At the end of Tesla’s 2021 AI Day in August of last year, Elon Musk introduced a concept for “Tesla Bot,” an electromechanically actuated, autonomous bipedal “general purpose” humanoid robot. Musk suggested that a prototype of Tesla Bot (also called “Optimus”) would be complete within the next year. After a lot of hype, a prototype of Tesla Bot was indeed unveiled last night at Tesla’s 2022 AI Day. And as it turns out, the hype was just that—hype. While there’s absolutely nothing wrong with the humanoid robot that Musk very briefly demonstrated on stage, there’s nothing uniquely right, either. We were hoping for (if not necessarily expecting) more from Tesla. And while the robot isn’t exactly a disappointment, there’s very little to suggest that it disrupts robotics the way that SpaceX did for rockets or Tesla did for electric cars. You can watch the entire 3+ hour livestream archived on YouTube (which also includes car stuff and whatnot), but we’re just going to focus on the most interesting bits about Tesla Bot/Optimus. Setting Expectations Before revealing the robot, Musk attempted to set reasonable expectations for the prototype.Tesla These quotes are all from Musk. “I do want to set some expectations with respect to our Optimus robot… Last year was just a person in a robot suit, but we’ve come a long way, and compared to that, it’s going to be very impressive.” It’s far, far too late for Musk to be attempting to set reasonable expectations for this robot (or Tesla’s robotics program in general). Most roboticists know better than to use humans when setting expectations for humanoid robots, because disappointment is inevitable. And trying to save it at the literal last minute by saying, “compared to not having a robot at all, our robot will be very impressive,” while true, is not going to fix things. “I think there’s some potential that what we’re doing here at Tesla could make a meaningful contribution to AGI.” Yeah, I’m not touching that. Right before the robot was brought on stage, one of the engineers made clear that this was going to be the first time that the robot would be walking untethered and unsupported. If true, that’s bonkers, because why the heck would you wait until this moment to give that a try? I’m not particularly impressed, just confused. For some context on what you’re about to see, a brief callback to a year ago, when I predicted what was in store for 2022: It’s possible, even likely, that Tesla will build some sort of Tesla Bot by sometime next year, as Musk says. I think that it won’t look all that much like the concept images in this presentation. I think that it’ll be able to stand up, and perhaps walk. Maybe withstand a shove or two and do some basic object recognition and grasping. And I think after that, progress will be slow. But the hard part is not building a robot, it’s getting that robot to do useful stuff, and I think Musk is way out of his depth here. Tesla Bot Development Platform Demo I’m reminded of the 2015 DARPA Robotics Challenge, because many of the humanoid platforms looked similar to the way Tesla Bot looks. I guess there’s only so much you can do with a mostly naked electromechanical humanoid in terms of form factor, but at first glance there’s nothing particularly innovative or futuristic about Tesla’s design. If anything, the robot’s movement is not quite up to DRC standards, since it looks like it would have trouble with any kind of accidental contact or even a bit of non-level floor (and Musk suggested as much). 
    On stage, the robot did very little. It walked successfully, but not very dynamically. The “moves” it made may well have been entirely scripted, so we don’t know to what extent the robot can balance on its own. I’m glad it didn’t fall on its face, but if it had, I wouldn’t have been surprised or judged it too harshly. Tesla showed videos of the robot watering plants, carrying a box, and picking up a metal bar at a factory. Tesla After the very brief live demo, Musk showed some video clips of the prototype robot doing other things (starting at 19:30 in the livestream). These clips included the robot walking while carrying a box of unspecified weight and placing it on a table, and grasping a watering can. The watering can was somewhat impressive, because gripping that narrow handle looks tricky. “The robot can actually do a lot more than we’ve just showed you. We just didn’t want it to fall on its face.”—Elon Musk However, despite the added footage from the robot’s sensors, we have no idea how this was actually done; whether it was autonomous or not; or how many tries it took to get right. There’s also a clip of a robot picking an object and attempting to place it in a bin, but the video cuts right before the placement is successful. This makes me think that we’re seeing carefully curated best-case scenarios for performance. That was our rough development robot, using semi-off-the-shelf actuators, but we’ve gone a step farther than that already. We actually have an Optimus bot with fully Tesla-designed actuators, battery pack, control system, everything—it wasn’t quite ready to walk, but we wanted to show you something that’s fairly close to what will go into production. Tesla Bot Latest Generation Demo This looks a bit more like the concept that Tesla showed last year, although obviously it’s less functional than the other prototype we saw. It’s tempting to project the capabilities of the first robot onto the second robot, but it would be premature to do so. Here you’re seeing Optimus with the degrees of freedom that we expect to have in the Optimus production unit, which is the ability to move all the fingers independently and opposable thumbs, so that it’s able to operate tools and do useful things. Just like last year, Musk is implying that the robot will be able to operate tools and do useful things because it has the necessary degrees of freedom. But of course the hardware is only the first step towards operating tools and doing useful things, and the software is, I would argue, much harder and far more time consuming, and Tesla seems to have barely started work on that side of things. Our goal is to make a useful humanoid robot as quickly as possible. We’ve designed it using the same discipline that we use in designing the car, which is to design it for manufacturing, such that it’s possible to make the robot at high volume with low cost and high reliability. That’s incredibly important. …Optimus is designed to be an extremely capable robot, but made in very high volume, ultimately millions of units. And it is expected to cost much less than a car—much less than $20,000 would be my guess. I generally agree with Musk here, in that historically, humanoid robots were not designed for manufacturability. This is changing, however, and I think that other companies likely have a head start over Tesla in manufacturability now. But it’s entirely possible that Tesla will be able to rapidly catch up if they’re able to leverage all that car building expertise into robot building somehow. 
    It’s not a given that it’ll work that way, but it’s a good idea, potentially a big advantage. As for the production volume and cost, I have no idea what “expected” means. This line got some applause, but as far as I’m concerned, these numbers are basically meaningless at the moment. You’ve all seen very impressive humanoid robot demonstrations, and that’s great, but what are they missing? They’re missing a brain—they don’t have the intelligence to navigate the world by themselves. I’m not exactly sure who Musk is throwing shade at, but there are only a couple of companies who’d probably qualify with “very impressive humanoid robot demonstrations.” And those companies do, in fact, have robots that broadly have the kind of intelligence that allows them to navigate at least some of the world by themselves, much better than we have seen from Optimus at this point. If Musk is saying that those robots are insufficiently autonomous or world-aware, then okay, but so far Tesla has not done better, and doing better will be a lot of work. The team has put in an incredible amount of work, seven days a week, to get to the demonstration today. I’m super proud, and they’ve really done a great job. While the actual achievements here have been mercilessly overshadowed by the hype surrounding them, this is truly an astonishing amount of work to be done in such a short time, and Tesla’s robotics team should be proud of what they’ve accomplished. And while there will inevitably be comparisons to other companies with humanoid robots, it’s critical to remember the context here: Tesla has made this happen in something like eight months. It’s nuts. There’s still a lot of work to be done to refine Optimus and improve it, and that’s really why we’re holding this event—to convince some of the most talented people in the world to join Tesla and help make it a reality, help bring it to fruition at scale so that it can help millions of people. I can see the appeal of Tesla for someone who wants to start a robotics career, since you’d get to work on a rapidly evolving hardware platform backed by what I can only assume is virtually unlimited resources. …This means a future of abundance, a future where there is no poverty, where you can have whatever you want in terms of products and services. It really is a fundamental transformation of civilization as we know it. Maybe just, like, get your robot to reliably and affordably do A Single Useful Thing, first? Three versions of the Optimus design: Concept, Development Platform, and Latest Generation. Tesla Musk takes a break after this, and we get some actual specific information from a series of Tesla robotics team members about the latest generation Optimus. Optimus Hardware: 28 degrees of freedom; 11 additional degrees of freedom in each hand; and a 2.3-kWh, 52-volt battery pack, “perfect for about a full day of work.” We’ll come back to the hands, but that battery really stands out for being able to power the robot for an entire day(ish). Again, we have to point out that until Tesla actually demonstrates this, it’s not all that meaningful, but Tesla does know a heck of a lot about power systems and batteries and I’m guessing that they’ll be able to deliver on this. Tesla is using simulations to design the robot’s structure so that it can suffer minimal damage after a fall.Tesla I appreciate that Tesla is thinking very early about how to structure their robot to be able to fall down safely and get up again with only superficial damage. 
Although, they don’t seem to be taking advantage of any kind of protective motion for fall mitigation, which is an active area of research elsewhere. And what is not mentioned in this context is safety of others. I’m glad the robot won’t get damaged all that much when it falls, but can Tesla say the same for whoever might be standing next to it? Optimus will use six different actuators: three rotary and three linear units.Tesla Tesla’s custom actuators seem very reasonable. Not special, particularly, but Tesla has to make its own actuators if it needs a lot of them, which it supposedly will. I’d expect these to be totally decent considering the level of mechanical expertise Tesla has, but as far as I can tell nothing here is crazy small or cheap or efficient or powerful or anything like that. And it’s very hard to tell from these slides and from the presentation just how well the actuators are going to work, especially for dynamic motions. The robot’s software has a lot of catching up to do first. Optimus will feature a bio-inspired hand design with cable-driven actuators.Tesla Each hand has six cable-driven actuators for fingers and thumb (with springs to provide the opening force), which Tesla chose for simplicity and to minimize part count. This is perhaps a little surprising, since cable drives typically aren’t as durable and can be more finicky to keep calibrated. The five-finger hand is necessary, Tesla says, because Optimus will be working with human tools in human environments. And that’s certainly one perspective, although it’s a big tradeoff in complexity. The hand is designed to carry a 9kg bag. Optimus Software Tesla is using software components developed for its vehicles and porting them to the robot’s environment.Tesla Software! The following quote comes from Milan Kovac, who’s on the autonomy team. All those cool things we showed earlier in the videos were possible in just a few months, thanks to the amazing work we’ve done on Autopilot over the past few years. Most of those components ported quite easily over to the bot’s environment. If you think about it, we’re just moving from a robot on wheels, to a robot on legs. Some of the components are similar, and some others required more heavy lifting. I still fundamentally disagree with the implied “humanoid robots are just cars with legs” thing, but it’s impressive that they were able to port much at all—I was highly skeptical of that last year, but I’m more optimistic now, and being able to generalize between platforms (on some level) could be huge for both Tesla and for autonomous systems more generally. I’d like more details on what was easy, and what was not. Tesla showed how sensing used in its vehicles can help the Optimus robot navigate.Tesla What we’re seeing above, though, is one of the reasons I was skeptical. That occupancy grid (where the robot’s sensors are detecting potential obstacles) on the bottom is very car-ish, in that the priority is to make absolutely sure that the robot stays very far away from anything it could conceivably run into. By itself, this won’t transfer well to a humanoid robot that needs to directly interact with objects to do useful tasks. I’m sure there are lots of ways to adapt the Tesla car’s obstacle avoidance system, but that’s the question: how hard is that transfer, and is it better than using a solution developed specifically for mobile manipulators? 
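    To make the occupancy-grid point concrete, here is a minimal sketch of the idea, including the conservative obstacle “inflation” that suits a car but works against close-range manipulation. The grid size, resolution, and inflation radius below are arbitrary assumptions for illustration; none of this is Tesla’s implementation.

    ```python
    # A minimal illustration (not Tesla's code) of a 2D occupancy grid like the one
    # shown in the presentation, including the conservative obstacle "inflation" that
    # makes sense for a car but gets in the way of close-range manipulation.
    # The grid size, resolution, and inflation radius are arbitrary assumptions.
    import numpy as np

    RESOLUTION_M = 0.1        # each cell covers 10 cm x 10 cm
    GRID_SIZE = 100           # 10 m x 10 m area
    INFLATION_RADIUS_M = 0.5  # keep-out margin added around every detected obstacle

    def update_grid(obstacle_points_m):
        """Mark detected points as occupied, then inflate them by a safety margin."""
        grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=bool)
        for x, y in obstacle_points_m:
            i, j = round(x / RESOLUTION_M), round(y / RESOLUTION_M)
            if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
                grid[i, j] = True
        # Inflation: every cell within INFLATION_RADIUS_M of an occupied cell is blocked too.
        r = round(INFLATION_RADIUS_M / RESOLUTION_M)
        inflated = grid.copy()
        for i, j in np.argwhere(grid):
            inflated[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1] = True
        return inflated

    # Example: a table edge detected about 2 m ahead picks up a half-meter keep-out
    # margin on every side, which is exactly the behavior that keeps a car safe but
    # would stop a manipulator from ever getting close enough to grasp anything.
    blocked = update_grid([(2.0, 0.1 * k) for k in range(10, 40)])
    print(blocked.sum(), "of", blocked.size, "cells blocked")
    ```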
    Tesla explained the challenges of dynamic walking in humanoid robots, and its approach to motion planning.Tesla The next part of the presentation focused on some motion planning and state estimation stuff that was very basic, as far as I could make out. There’s nothing wrong with the basics, but it’s slightly weird that Tesla spent so much time on this. I guess it’s important context for most of the people watching, but they sort of talked about it like they’d discovered how to do all of this stuff themselves, which I hope they didn’t, because again, very, very basic stuff that other humanoid robots have been doing for a very long time. Tesla adopted a traditional approach to motion control, based on a model of the robot and state estimation.Tesla One more quote from Milan Kovac: Within the next few weeks, we’re going to start focusing on a real use case at one of our factories. We’re really going to try to nail this down, and iron out all of the elements needed to deploy this product in the real world. I’m pretty sure we can get this done within the next few months or years, and make this product a reality and change the entire economy. Ignoring that last bit about changing the entire economy, and possibly also ignoring the time frame because “next few months or years” is not particularly meaningful, the push to make Tesla Bot useful is another substantial advantage that Tesla has. Unlike most companies working on humanoid robots, Tesla is potentially its own biggest customer, at least initially, and having these in-house practical tasks for the robot to train on could really help accelerate development. However, I’m having trouble imagining what Tesla Bot would actually do in a factory that would be uniquely useful and not done better by a non-humanoid robot. I’m very interested to see what Tesla comes up with here, and whether they can make it happen in months (or years). I suspect that it’s going to be much more difficult than they are suggesting that it will be, especially as they get to 90% of where they want to be and start trying to crack that last 10% that’s necessary for something reliable. This was the end of the formal presentation about Optimus, but there was a Q&A at the end with Musk where he gave some additional information about the robot side of things. He also gave some additional non-information, which is worth including just in case you haven’t yet had enough eye rolling for one day. Audience Q&A Musk expects Optimus to cost less than a car, “much less than $20,000 would be my guess,” he said.Tesla Our goal with Optimus is to have a robot that’s maximally useful as quickly as possible. There are a lot of ways to solve the various problems of a humanoid robot, and we’re probably not barking up the right tree on all the technical solutions. We’re open to evolving the technical solutions that you see here over time. But we had to pick something. We’re trying to follow the goal of fastest pathway to a useful robot that can be made at volume. And we’re going to test the robot internally in our factory to see how useful it is, because you have to close the loop on reality to confirm that the robot is, in fact, useful. 
This is a variation on the minimum-viable product idea, although it seems to be more from the perspective of making a generalist robot, which is somewhat at odds with something minimally viable. It’s good that Musk views the hardware as something in flux, and that he’s framed everything within a plan for volume production. This isn’t the only way to do it—you can first build a useful robot and then figure out how to make it cheaper, but Tesla’s approach could get them to production faster. If, that is, they are able to confirm that the robot is in fact useful. I’m still not convinced that it will be, at least not on a time scale that will satisfy Musk. I think we’ll want to have really fun versions of Optimus. Optimus can be utilitarian, and do tasks, but it can also be like a friend, and a buddy, and hang out with you. I’m sure people will think of great uses for this robot. Once you have the core intelligence and actuators figured out, you can put all sorts of costumes, I guess, on the robot. While Musk seems to be mostly joking here, the whole “it’s going to be your friend” is really not a good perspective to bring to a robot like this, in my opinion. Or probably any robot, at all honestly. We want over time for Optimus to be the kind of android that you see in sci-fi, like in Star Trek: The Next Generation, like Data. But obviously we could program the robot to be less robot-like and more friendly, and it can obviously learn to emulate humans and feel very natural. Less robot-like and more friendly than a human pretending to be a robot trying to be a human? Good luck with that. We’re going to start Optimus with very simple tasks in the factory, like maybe carrying a part from one place to another, or loading a part into a conventional robot cell. We’ll start with how do we make it useful at all, and then gradually expand the number of situations in which it’s useful. I think the number of situations where Optimus is useful will grow exponentially. I think it’s more likely that in the short-to-medium term, Tesla will struggle to find situations where Optimus is uniquely useful in an efficient and cost-effective way. In terms of when people can order one, I think it’s not that far away. I don’t know, I’d say within three years, probably not more than five years. Uh. Maybe as a research platform? I think Optimus is going to be incredible in five years. In 10 years, mind-blowing. I’m really interested to see that happen, and I hope you are too. Despite my skepticism on the time frame here, five years is a long time for any robot, and ten years is basically forever. I’m also really interested to see these things happen, although Musk’s definitions of “incredible” and “mind-blowing” may be much different than mine. But we’ll see, won’t we? What’s Next? Tesla’s AI Day serves as a recruitment event for the company. “There’s still a lot of work to be done to refine Optimus and improve it, and that’s really why we’re holding this event—to convince some of the most talented people in the world to join Tesla,” Musk said.Tesla I think Elon Musk now has a somewhat better idea of what he’s doing with Tesla Bot. The excessive hype is still there, but now that they’ve actually built something, Musk seems to have a much better idea of how hard it actually is. Things are only going to get more difficult from here. Most of what we saw in the presentation was hardware. 
    And hardware is important and a necessary first step, but software is arguably a much more significant challenge when it comes to making robotics useful in the real world. Understanding and interacting with the environment, reasoning and decision-making, the ability to learn and be taught new tasks—these are all necessary pieces of the puzzle of a useful robot that Tesla is trying to put together, but they’re all also extremely difficult, cutting-edge problems, despite the enormous amount of work that the research community has put into them. And so far, we (still) have very little indication that Tesla is going to be any better at tackling this stuff than anyone else. There doesn’t appear to be anything all that special or exciting from Tesla that provides any unique foundation for Musk’s vision in a way that’s likely to allow them to outpace other companies working on similar things. I’ll reiterate what I said a year ago: the hard part is not building a robot, it’s getting that robot to do useful stuff. I could, of course, be wrong. Tesla likely has more resources to throw at this problem than almost anyone else. Maybe the automotive software will translate much better and faster than I think it will. There could be a whole bunch of simple but valuable use cases in Tesla’s own factories that will provide critical stepping stones for Optimus. Tesla’s battery and manufacturing expertise could have an outsized influence on the affordability, reliability, and success of the robot. Their basic approach to planning and control could become a reliable foundation that will help the system mature faster. And the team is obviously very talented and willing to work extremely hard, which could be the difference between modest success and slow failure. Honestly, I would love to be wrong. We’re just starting to see some realistic possibilities with commercial legged and humanoid robots. There are lots of problems to solve, but also lots of potential, and Tesla finding success would be a huge confidence boost for commercial humanoids broadly. We can also hope that all of the resources that Tesla is putting towards Optimus will either directly or indirectly assist other folks working on humanoid robots, if Tesla is willing to share some of what they learn. But as of today, this is all just hoping, and it’s on Tesla to actually make it happen.

  • "Nothing About Us Without Us"
    by Harry Goldstein on 1 October 2022 at 15:00

    Before we redesigned our website a couple of years ago, we took pains to have some users show us how they navigate our content or complete specific tasks like leaving a comment or listening to a podcast. We queried them about what they liked or didn’t like about how our content is presented. And we took on board their experiences and designed a site and a magazine based on that feedback. So when I read this month’s cover story by Britt Young about using a variety of high- and low-tech prosthetic hands, I was surprised to learn that much bionic-hand development is conducted without taking the lived experience of people who use artificial hands into account. I shouldn’t have been. While user-centered design is a long-standing practice in Web development, it doesn’t seem to have expanded deeply into other product-development practices. A quick search on the IEEE Xplore Digital Library tallied fewer than 2,000 papers (out of 5.7 million) on “user-centered design.” Five papers bubbled up when searching “user-centered design” and “prosthesis.” Young, who is working on a book about the prosthetics industry, was in the first cohort of toddlers fitted with a myoelectric prosthetic hand, which users control by tensing and relaxing their muscles against sensors inside the device’s socket. Designed by people Young characterizes as “well-intentioned engineers,” these technologically dazzling hands try to recreate in all its complex glory what Aristotle called “the instrument of instruments.” While high-tech solutions appeal to engineers, Young makes the case that low-tech solutions like the split hook are often more effective for users. “Bionic hands seek to make disabled people ‘whole,’ to have us participate in a world that is culturally two-handed. But it’s more important that we get to live the lives we want, with access to the tools we need, than it is to make us look like everyone else.” As Senior Editor Stephen Cass pointed out to me, one of the rallying cries of the disabled community is “nothing about us, without us.” It is a response to a long and often cruel history of able-bodied people making decisions for people with disabilities. Even the best intentions don’t make up for doing things for disabled people instead of with them, as we see in Young’s article. Assistive and other technologies can indeed have huge positive impacts on the lives of people with disabilities. IEEE Spectrum has covered many of these developments over the decades, but generally speaking it has involved able-bodied journalists writing about assistive technology, often with the perspective of disabled people relegated to a quote or two, if it was included at all. We are fortunate now to have the chance to break that pattern, thanks to a grant from the IEEE Foundation and the Jon C. Taenzer Memorial Fund. With the grant, Spectrum is launching a multiyear fellowship program for disabled writers. The goal is to develop writers with disabilities as technology journalists and provide practical support for their reporting. These writers will investigate not just assistive technologies, but also look at other technologies with ambitions for mass adoption through a disability lens. Will these technologies be built with inclusion in mind, or will disabled people be a literal afterthought? 
Our first step will be to involve people with disabilities in the design of the program, and we hope to begin publishing articles by fellows early next year. This article appears in the October 2022 print issue.

  • Remembering LED Pioneer Nick Holonyak
    by Amanda Davis on 30 September 2022 at 18:00

    Nick Holonyak, Jr. holds a part of a stoplight that utilizes a newer LED designed by his students. Ralf-Finn Hestoft/Getty Images Nick Holonyak Jr., a prolific inventor and longtime professor of electrical engineering and computing, died on 17 September at the age of 93. In 1962, while working as a consulting scientist at General Electric’s Advanced Semiconductor Laboratory, he invented the first practical visible-spectrum LED. It is now used in light bulbs and lasers. Holonyak left GE in 1963 to become a professor of electrical and computer engineering and researcher at his alma mater, the University of Illinois Urbana-Champaign. He retired from the university in 2013. He received the 2003 IEEE Medal of Honor for “a career of pioneering contributions to semiconductors, including the growth of semiconductor alloys and heterojunctions, and to visible light-emitting diodes and injection lasers.” LED and other semiconductor industry breakthroughs After Holonyak earned bachelor’s, master’s, and doctoral degrees in electrical engineering from the University of Illinois, he was hired in 1954 as a researcher at Bell Labs, in Murray Hill, N.J. There he investigated silicon-based electronic devices. He left in 1955 to serve in the U.S. Army Signal Corps, and was stationed at Fort Monmouth, N.J., and Yokohama, Japan. After being discharged in 1957, he joined GE’s Advanced Semiconductor Laboratory, in Syracuse, N.Y. While at the lab, he invented a shorted emitter thyristor device. The four-layered semiconductor is now found in light dimmers and power tools. In 1962 he invented the red-light semiconductor laser, known as a laser diode, which now is found in cellphones as well as CD and DVD players. Later that year, he demonstrated the first visible LED—a semiconductor source that emits light when current flows through it. LEDs previously had been made of gallium arsenide. He created crystals of gallium arsenide phosphide to make LEDs that would emit visible, red light. His work led to the development of the high-brightness, high-efficiency white LEDs that are found in a wide range of applications today, including smartphones, televisions, headlights, traffic signals, and aviation. Pioneering research at the University of Illinois Holonyak left GE in 1963 and joined the University of Illinois as a professor of electrical and computer engineering. In 1977 he and his doctoral students demonstrated the first quantum well laser, which later found applications in fiber optics, CD and DVD players, and medical diagnostic tools. The university named him an endowed-chair professor of electrical and computer engineering and physics in 1993. The position was named for John Bardeen, an honorary IEEE member who had received two Nobel Prizes in Physics as well as the 1971 IEEE Medal of Honor. Bardeen was Holonyak’s professor in graduate school. The two men collaborated on research projects until Bardeen’s death in 1991. Together with IEEE Life Fellow Milton Feng, Holonyak led the university’s transistor laser research center, which was funded by the U.S. Defense Advanced Research Projects Agency. There they developed transistor lasers that had both light and electric outputs. The innovation enabled high-speed communications technologies. More recently, Holonyak developed a technique to bend light within gallium arsenide chips, allowing them to transmit information by light rather than electricity. He supervised more than 60 graduate students, many of whom went on to become leaders in the electronics field. 
Queen Elizabeth prize, Draper prize, and other awards Holonyak received last year’s Queen Elizabeth Prize for Engineering; the National Academy of Engineering’s 2015 Draper Prize; the 2005 Japan Prize; and the 1989 IEEE Edison Medal. In 2008 he was inducted to the National Inventors Hall of Fame, in Akron, Ohio. He was a fellow of the American Academy of Arts and Sciences, the American Physical Society, and Optica. He was also a foreign member of the Russian Academy of Sciences. In addition Holonyak was a member of the U.S. Academies of Engineering and Sciences. Read the full story about Holonyak’s LED breakthrough in IEEE Spectrum.

  • Video Friday: StickBot
    by Evan Ackerman on 30 September 2022 at 15:10

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. IROS 2022: 23–27 October 2022, KYOTO, JAPAN ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND Enjoy today’s videos! From Devin Carroll, who brought us a robot made of ice, is a robot made of sticks. [ UPenn ] Amazon Astro can now check on your pets for you. Not sure how the pets feel about that, though. [ Amazon ] Soft robot hugs for everyone! [ Paper ] Scythe’s upgraded M.52 autonomous robotic mower can now handle more complex obstacles and terrain, with big enough batteries to behead grass all day long. [ Scythe ] Thanks, Jack! Agility CEO Damion Shelton and CTO Jonathan Hurst discuss artificial intelligence and its role in robot control. They also discuss the capability of robot learning paired with physics-based locomotion, Cassie setting a new world record using learned policies for control, and an exploration of the future of robotics through Dall-E. That new version of Digit is looking slick! [ Agility Robotics ] Intel gives an update on RealSense at a recent ROS Industrial meeting, and the part you’ll probably want to listen to starts at 3:50. [ ROS-I ] Local navigation and locomotion of legged robots are commonly split into separate modules. In this work, we propose to combine them by training an end-to-end policy with deep reinforcement learning. Training a policy in this way opens up a larger set of possible solutions, which allows the robot to learn more complex behaviors. That box climbing, right? [ RSL ] Neura Robotics is building a new humanoid. Most of this video is CG, but since there does appear to be a physical robot at the very end (albeit one that doesn’t do much), we’ll let it slide. [ Neura Robotics ] Dino Robotics will help you teach your robot to make hard-boiled eggs, which will make it a better chef than I am. [ Dino Robotics ] You know what sucks for robots? Lidar in blowing snow. [ Paper ] This research is banned in the United States. [ Shadow ] We often get asked about how Starship robots navigate around the community and those within, so we wanted to give a little insight—and some tips on what to do if you come across one on your journey. Have a look at how our robots navigate around various obstacles throughout their delivery journeys. [ Starship ] AIIRA’s vision is to create new AI-driven, predictive digital twins for modeling plants, and deploy them to increase the resiliency of the nation’s agricultural systems. [ AIIRA ] On 22 September 2022, Ryan Eustice of Toyota Research Institute talked to robotics students as a speaker in the Undergraduate Robotics Pathways & Careers Speaker Series, which aims to answer the question “What can I do with a robotics degree?” [ Michigan Robotics ] This Maryland Robotics Center Seminar is by Michael T. Tolley at University of California, San Diego, on biologically inspired soft mobile robots. [ UMD ]

  • The Electric Purple Snake-Oil Machine
    by Allison Marsh on 30 September 2022 at 15:00

    The violet ray machine has an awesome name that conjures up images of cartoon supervillains taking out Gotham, but its actual history is even odder—and it includes a superhero, not a villain. The technology underpinning the machine begins with none other than Nikola Tesla and his eponymous coil. After Tesla and others made some refinements to the device, an influential clairvoyant named Edgar Cayce popularized violet ray machines for treating just about every kind of ailment—rheumatism and nervous conditions, acne and baldness, gonorrhea and prostate troubles, brain fog and writer’s cramp. Even Wonder Woman had her own health-restoring Purple Ray device. During the first half of the 20th century, a number of companies manufactured and sold the machines, which became ubiquitous for a time. And yet the scientific basis for the healing effects of violet rays was scant. So what accounted for their popularity? The cutting-edge tech of the violet ray machine Violet ray machines employ a Tesla coil, also known as a resonance transformer, to produce a high-frequency, low-current beam, which is then applied to the skin. Nikola Tesla kicked off this line of invention after traveling to Paris during the summer of 1889 to attend the Exposition Universelle. There he learned of Heinrich Hertz’s electromagnetic discoveries. Intrigued, Tesla returned to New York City to run some experiments of his own. The result was the Tesla coil, which he envisioned being used for wireless lighting and power. In April 1891, he applied for a U.S. patent for a “System of Electric Lighting,” which he received two months later. It would be the first in a series of related patents that spanned more than a decade. In May of that year, Tesla unveiled his wondrous invention to members of the American Institute of Electrical Engineers, during a lecture on his “Experiments with Alternate Currents of Very High Frequency and Their Application to Methods of Artificial Illumination.” He continued to test different circuit configurations and patented some (but not all) of his improvements, such as a “Means for Generating Electric Currents,” U.S. Patent No. 514,168. After more years of tinkering, Tesla perfected his resonance transformer and was granted U.S. Patent No. 1,119,732 for an “Apparatus for Transmitting Electrical Energy” on 1 December 1914. Nikola Tesla envisioned his eponymous coil being used for wireless lighting and power. It was also at the heart of the violet ray machine. Stocktrek Images/Getty Images Tesla promoted the medical use of the electromagnetic spectrum, suggesting to physicians that different voltages and currents could be used to treat a variety of conditions. His endorsement came at a time when trained doctors as well as shrewd hucksters were already experimenting with electrotherapy and ultraviolet light to help patients or to make a buck, depending on your perspective. The market was perfectly primed for the violet ray machine, in other words. Tesla himself never commercialized a medical device based around his coil, but others did. The French physician and electrophysiologist Jacques-Arsène d’Arsonval modified Tesla’s design to make the device safer for human use. It was further improved by another French doctor and electrotherapy researcher, Paul Marie Oudin. In 1893, Oudin crafted the first working prototype of what eventually became the violet ray machine. Four years later, Frederick Strong developed an American version. 
    Another charismatic individual gets credit for popularizing the device: the psychic Edgar Cayce. As a young adult, Cayce reportedly lost his voice for over a year. No doctor could cure him, and in desperation he underwent hypnosis. He not only regained the ability to speak, he also began suggesting medical advice and homeopathic remedies. Cayce, who claimed to have had visions from childhood, became a professional clairvoyant, and for the next 40 years he dispensed his wisdom through psychic readings. Out of more than 14,000 recorded readings, Cayce mentioned the violet ray machine almost 900 times. In case you doubt his status as an influencer, Cayce counted Thomas Edison, composer George Gershwin, and U.S. president Woodrow Wilson among his clients. Was there nothing the violet ray machine couldn’t cure? The popularity of violet ray machines exploded after 1915, once all of the components for a portable device could be easily manufactured. They could be plugged into a lamp or wall socket or wired to a battery—remember that most homes and businesses in the early 20th century were not yet electrified, and so most manufacturers offered both alternating and direct current options. The machine’s handheld wand consisted of a Tesla coil wrapped in an insulating material, such as Bakelite. The coil produced 1 to 2 kilovolts, which charged a condenser that then discharged at a rate between 4 and 10 kilohertz when passed over the skin. A voltage selector controlled the intensity of the spark, creating anything from a mild sensation to something quite intense. Glass electrodes—partially evacuated glass tubes known as Geissler tubes—could be inserted into the wand. These came in different shapes depending on their intended use. For example, a rake-shaped attachment worked to massage the scalp, while a narrow tube could be inserted into the mouth, nose, or another orifice. The high voltage ionized the gas within the glass tube, creating the purple glow that gave the device its name. Numerous manufacturers sprang up to produce the portable machines, including Detroit’s Renulife Electric Co. Founded by inventor James Henry Eastman in 1917, Renulife sold different models for different uses. According to company literature, Model M was its most popular general-purpose product, while Model D was for dentistry, and the tricked-out Model R [pictured at top] had finer regulation of current and a built-in ozone generator to help with head and lung congestion. Instructions for the violet ray machines manufactured by Charles A. Branston Ltd. contain an alphabetical list of disorders that could be treated, from abscess to writer’s cramp, with dozens of other ailments in between. Like the Renulife products, the Branston machines also came in different flavors. The Branston machine’s high-frequency mode had germicidal effects and purportedly could be used to cure infections as well as relieve pain. Sinusoidal mode was used to gently massage away nervousness and paralysis. 
Ozone mode was for inhaling, to treat lung disorders. The Branston devices ranged in price from US $30 for the Model 5B (high-frequency mode only) to $100 for the Model 29 (which had all three modes). The violet ray machines made by Charles A. Branston Ltd. had different modes for treating a wide variety of ailments.Historical Medical Library/College of Physicians of Philadelphia During the first half of the 20th century, manufacturers marketed the machines to doctors and consumers alike. By the time Wonder Woman debuted in her own comic book in June 1942, the violet ray machine was a well-known household technology. So it wasn’t too surprising that the superhero had a machine of her own. In the very first issue, Wonder Woman’s future love interest, Steve Trevor, is grievously injured in a plane crash. Seeking to cure his wounds, Diana works tirelessly for five days to complete her Purple Ray machine—but she’s too late. Trevor has died. Undeterred, Diana bathes her patient in the glowing light of the machine. The result might have embarrassed even the admen who wrote the promotional copy for Branston’s products: Wonder Woman’s Purple Ray brings Trevor back to life. Science frowns on the violet ray machine Despite their popularity, the machines didn’t fare quite as well within the medical establishment. In 1917, editors at the Journal of the American Medical Association reported that a violet ray generator certainly couldn’t treat “practically every ailment known to mankind,” as one manufacturer had claimed. Although the devices emitted a violet color, they were not in fact emitting ultraviolet light, or at least not in amounts that would be beneficial. In 1951, a Maryland district court ruled against a company named Master Appliances in a libel suit. The charge was misbranding, and the court found that the device was not an effective treatment nor capable of producing the claimed results. At the time, Master Appliances was one of the last manufacturers of violet ray machines in the United States, and the ruling effectively ended production in this country. And yet you can still buy violet ray machines today—both the antique variety and its modern equivalent. Today’s units are mainly marketed to aestheticians or sold for home use, and some dermatologists are not ready to categorically dismiss their benefits. Although they probably won’t cure indigestion or gray hair, the high frequency can dry out the skin and ozone does kill bacteria, so the machines may help treat acne and other skin conditions. Plus, there’s the placebo effect. As with all consumer electronics for which outrageous claims are made, let the buyer beware. Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the October 2022 print issue.

  • Taming the Climate Is Far Harder Than Getting People to the Moon
    by Vaclav Smil on 29 September 2022 at 15:00

    In his 1949 book The Concept of Mind, Gilbert Ryle, an English philosopher, introduced the term “category mistake.” He gave the example of a visitor to the University of Oxford who sees colleges and a splendid library and then asks, “But where is the university?” The category mistake is obvious: A university is an institution, not a collection of buildings. Today, no category mistake is perhaps more consequential than the all-too-common view of the global energy transition. The error is to think of the transition as the discrete, well-bounded task of replacing carbon fuels by noncarbon alternatives. The apparent urgency of the transition leads to calls for confronting the challenge just as the United States dealt with two earlier ones: winning the nuclear-arms race against Nazi Germany and the space race against the Soviet Union. The Manhattan Project produced an atomic bomb in three years, and Project Apollo put two U.S. citizens on the moon in July 1969, eight years after President Kennedy had announced the goal. But difficult and costly as those two endeavors were, they affected only small parts of the economy, their costs were relatively modest, and the lives of average citizens were hardly affected. It is just the opposite for the decarbonization of the energy supply. Ours is an overwhelmingly fossil-fueled civilization, and the size and complexity of our extensive supersystem of fuel extraction, processing, distribution, storage, and conversion means that a complete displacement of it will directly affect every person and every industry, not least the growing of food and the long-distance transport of goods and people. The costs will be stupendous. By the time the Manhattan Project ended in 1946, it had cost the country nearly US $2 billion, about $33 billion in today’s money, the total equal to only about 0.3 percent of the 1943-45 gross domestic product. When Project Apollo ended in 1972, it had cost about $26 billion, or $207 billion in today’s money; over 12 years it worked out annually to about 0.2 percent of the country’s 1961-72 GDP. Of course, nobody can provide a reliable account of the eventual cost of global energy transition because we do not know the ultimate composition of the new primary energy supply. Nor do we know what shares will come from converting natural renewable flows, whether we will use them to produce hydrogen or synthetic fuels, and the extent to which we will rely on nuclear fission (and, as some hope, on fusion) or from other, still unknown options. But a recent attempt to estimate such costs confirms the magnitude of the category mistake. The McKinsey Global Institute, in a highly conservative estimate, puts the cost at $275 trillion between 2021 and 2050. That is roughly $9.2 trillion a year, compared with the 2021 global economic product of $94 trillion. Such numbers imply an annual expenditure of about 10 percent of today’s world economic product. And because the world’s low-income countries could not carry such burdens, affluent nations would have to devote on the order of 15 to 20 percent of their annual economic product to the task. Such shares are comparable only to the spending that was required to win World War II. 
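    The arithmetic behind that comparison is easy to restate. The sketch below is a back-of-the-envelope check using only the figures already cited in the column; it adds no new data.

    ```python
    # A back-of-the-envelope check of the figures cited above; all inputs are the
    # article's own numbers, and the per-year value is a simple average.

    total_cost_trillion = 275         # McKinsey estimate for 2021-2050, US$ trillion
    years = 2050 - 2021 + 1           # 30 years
    world_product_2021_trillion = 94  # 2021 global economic product, US$ trillion

    annual = total_cost_trillion / years
    print(f"annual cost: ~${annual:.1f} trillion")                                        # ~9.2
    print(f"share of world product: ~{100 * annual / world_product_2021_trillion:.0f}%")  # ~10%

    # For contrast, the two crash programs cited: the Manhattan Project's ~$33 billion
    # (today's money) was ~0.3 percent of 1943-45 U.S. GDP, and Apollo's ~$207 billion
    # over 12 years averaged ~0.2 percent of 1961-72 GDP per year.
    ```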
This article appears in the October 2022 print issue as “Decarbonization Is Our Greatest Challenge.”

  • Stephen Welby: A Man on a Mission
    by Kathy Pretz on 28 September 2022 at 18:00

    In his five years as IEEE’s executive director and chief operating officer, Stephen Welby has led the organization through a global pandemic, a changing publishing landscape, and soaring geopolitical tensions. Welby, an IEEE Fellow, is leaving at the end of the year to spend more time with his family while he explores his career options. The IEEE Board of Directors has named Sophie Muirhead, IEEE’s current general counsel and CCO, as his replacement. Welby directs the daily operations of IEEE and its approximately 1,000 employees. While the IEEE Board of Directors sets the organization’s policies and strategic direction, the executive director’s job is to implement them and provide input about issues affecting the organization’s future. “I feel comfortable with stepping out of my role at this time because I feel that, in almost every way, thanks to the hard work of volunteers and staff, IEEE is stronger than when I arrived,” Welby says. IEEE’s mission of advancing technology for the benefit of humanity drew him to the organization, he says, after he served a term in the Obama administration as U.S. assistant secretary of defense for research and engineering. About Stephen Welby: Employer: IEEE; Title: Executive Director and Chief Operating Officer; Member grade: Fellow; Alma mater: The Cooper Union, New York, N.Y. The IEEE position “offered an opportunity to influence and engage a global community,” Welby says. “The work that IEEE is doing is important, and the ideas embedded in our mission statement are big drivers for ensuring that we leave a better world to future generations. “IEEE has an ambitious agenda: supporting our members around the world; deepening our technical engagement in emerging areas; building communities and disseminating technical information; meeting a growing demand for technical education; exploring new frontiers of technical standards; and engaging with the broader public about the critical role that technical professionals play in building a better world.” A commitment to diversity and the open access publishing model Welby lists his most important accomplishments as helping to strengthen IEEE’s commitment to diversity, inclusion, and professional ethics; updating the organization’s publication model to increase its embrace of open access; and shoring up its financial footing. “IEEE is very considerate in trying to make sure that it is a voice for all of our members,” he says. “We’ve been taking steps to be as inclusive as possible and to find ways that people can participate and contribute—which ties back to our mission. We’ve got to find ways to be open, welcoming, and maybe more than encouraging to ensure that everybody can contribute to the best of their abilities.” During his tenure, the IEEE Board revised its policy on diversity to ensure that members have a safe, inclusive place for collegial discourse. Changes also were made to the IEEE Code of Ethics to focus members on key elements of the code, including a commitment not to engage in harassment and to protect the privacy of others. When Welby began working for IEEE, he says, the open access publishing movement was perceived as a threat, but today he sees it as an opportunity. The movement calls for making research publications available without fees for the reader. 
“This requires a shift to new business models to cover expenses that were previously funded with subscription fees,” he says. “IEEE has been responsive to this change in a careful, deliberate, and responsible way. We are offering a wide variety of opportunities and tools for people to engage. The evolution of our publication activity has been about responding to community demands for new and diverse offerings.” Twenty open access journals have been launched in the past five years, for a total of 29 in the portfolio. Today they make up more than 20 percent of IEEE’s journal publishing output and that percentage is growing, Welby says. He says he is proud that IEEE is much stronger financially than when he started. In 2018, its total net assets were US $391 million, and by the end of last year it had more than $851 million, he says. He adds that he anticipates strong results for this year as well. “To support our mission in perpetuity, IEEE needs strong financial reserves—ideally reserves that are large enough to serve as reliable and stabilizing sources of revenue,” he says. “Doubling our reserves has been a significant accomplishment and represents an investment in our future and our long-term commitment to our mission.” Welby visiting IEEE's server room in its offices in Piscataway, New Jersey.Brad Trent Other achievements, he says, include the IEEE Standards Association’s facilitation of discussions about the responsible use of artificial intelligence. Another is the work that IEEE-USA has done in helping to shape public policy concerning technology. He points to major U.S. legislation passed this year: the Inflation Reduction Act, and the Creating Helpful Incentives for Producing Semiconductors for America and Foundries Act. Welby predicts that other countries will make similar investments, and he says he hopes IEEE will support those as well. “IEEE is in an interesting spot relative to public policy in the United States, Europe, and everywhere because we talk to the concerns of technologists, and about technology and how it impacts regulations and legislation,” he says. “We’re not arguing for a particular stakeholder; we’re arguing for investments in advancing technology. We’re engaged around the world in helping to ensure that research is adequately funded and that technical education gets the support it needs.” The impact of the pandemic and global tensions Leading a global organization that depends on in-person meetings and events has proven especially challenging in recent years. After the coronavirus pandemic began spreading in 2020, the organization suddenly shifted its business model from building communities through in-person engagement to relying almost exclusively on digital delivery, Welby says. “IEEE has a reputation for being a conservative organization, for being slow to change, and sometimes for being overly rule-bound and bureaucratic,” he says. “But in the face of a global crisis, we adapted quickly. Staff and volunteers worked closely together to redesign and redeploy IEEE activities in a manner that allowed our communities to remain connected even while physically distancing themselves for safety. There was great trust, great collaboration, and great success. “The ability to respond, to change, to evolve, and to adapt is one of IEEE’s great strengths,” he adds. “That makes me feel it will be ready to take on whatever the future brings and deliver on our mission.” Geopolitical tensions in recent years have impacted IEEE, he notes. 
Conflict among countries, trade restrictions, and proliferating international sanctions among nations have created new challenges for IEEE activities, he says. “We have worked hard to stay focused on IEEE’s role in supporting an international community of engineers and technical professionals that spans national borders and supports a global commitment to expanding the scientific and technical knowledge of humankind,” he says. “But IEEE also must operate in compliance with applicable laws and regulations while seeking clarification of those that appear misapplied in the IEEE context. At the same time, we are sympathetic to the hardships that international tension can place on our members.” Coordinating the enormous scope and scale of IEEE activities is complex. IEEE operates in 160 countries; has 46 societies and councils; has 343 sections in 10 geographic regions; maintains a portfolio of more than 1,075 technical standards; publishes 200 transactions, journals, and magazines; and holds more than 1,900 conferences and events. “Trying to maintain coordination and synchronization across this enormous portfolio, managing risk, and prioritizing resources is a continuous challenge,” Welby says. He says his one regret is that more progress has not been made in evolving IEEE’s membership model. The model has remained relatively unchanged since the organization was established in 1963. He points out that overall membership growth has largely flattened, despite the increased role that technology plays in society. Across IEEE there has been a decline in the share of members working in industry, while at the same time there has been recent strong growth in student membership. “We continue to rely on face-to-face interaction for many of our core membership activities, despite the growth of online, virtual, and digital communities,” Welby says. “Prospective volunteers are working longer hours [and] have family commitments. Plus, there are other activities competing for their time. The time and energy commitments that IEEE has traditionally demanded of its volunteers may not be a viable strategy in coming decades. It may be time to fundamentally rethink the structure of IEEE membership and explore different ways to engage our members.” He notes that those concerns have been studied and debated, and initiatives have been undertaken to explore alternative membership models. But, he says, there remains more work to do to develop consensus on the future of membership. Meeting with industry leaders and engineering students Welby says he’ll miss the concentrated chaos of an IEEE Board meeting series, where many decisions—great and small—are debated and decided in a short period of time. He’ll also miss talking to audiences of thousands of people about important technical topics, he says, and meeting with leaders from government and industry. “I have had the opportunity to see the amazing diversity of our membership,” he says, “and the fantastic work that they do, and their impact on the world. I spent a lot of time talking to as many volunteers in different roles as I could. I have particularly enjoyed engaging with students around the world and seeing their enthusiasm, their creativity, and their optimism. 
They give me great hope for our future.” Welby adds that he could not have accomplished as much as he did without the support and confidence of the IEEE presidents and boards he served, the thousands of IEEE volunteers, his senior management team, and the professional staff members around the world “who brought their creativity, commitment, and technical skills to every task.” “I will leave it to others to assess how effective I have been,” he says. “But I have woken up every day for the last five years thinking about what I could do to improve and advance IEEE’s mission.”

  • Robo-Ostrich Sprints to 100-meter World Record
    by Evan Ackerman on 28 September 2022 at 16:36

    For a robot that shares a leg design with the fastest-running bird on the planet, we haven’t ever really gotten a sense of how fast Agility Robotics’ Cassie is actually able to move. Oregon State University’s Cassie successfully ran a 5k last year, but it was the sort of gait that we’ve come to expect from humanoid robots—more of a jog, really, with measured steps that didn’t inspire a lot of confidence in higher speeds. Turns out, Cassie was just holding back, because she’s just sprinted her way to a Guinness World Record for fastest 100-meter run by a bipedal robot. Cassie’s average speed was just over 4 meters per second, completing the 100 meters in 24.73 seconds. And for a conventional1 bipedal robot, that is fast. Moreover, her top speed was certainly higher than 4 m/s, since the record attempt required a standing start (along with a return to the starting point without falling over). This is also by far the most ostrichlike I’ve ever seen Cassie move, with a springy birdlike gait. With a feathery costume on, the robot would be a dead ringer for the real bird, and it would give Cassie something to aspire to, since a real ostrich can run the 100-meter in 5 seconds flat. This was not an autonomous run, since this version of Cassie has no external sensors, and there was a human with a remote doing the steering. OSU’s Dynamic Robotics Laboratory has been working on this kind of dynamic movement for a while, but the sprinting in particular required some extra training in the form of gait optimization in simulation. And according to the researchers, one of the most difficult challenges was actually getting Cassie to reach a sprint from a standing start and then slow down to a stop on the other end without borking herself. “This may be the first bipedal robot to learn to run, but it won’t be the last,” Agility Robotics’ Jonathan Hurst said. “I believe control approaches like this are going to be a huge part of the future of robotics. The exciting part of this race is the potential. Using learned policies for robot control is a very new field, and this 100-meter dash is showing better performance than other control methods. I think progress is going to accelerate from here.” I certainly hope that this won’t be the last bipedal robot to learn to run, because I would pay money to attend a live bipedal robot race. 1Arguably, the fastest bipedal legged robot was probably the OutRunner—depending on what you decide counts as “legged” and “bipedal,” although it would not have qualified for this particular record due to its difficulty with starting and stopping.
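
    For readers who want to check the arithmetic, here is a minimal Python sketch that uses only the figures quoted above (100 meters, 24.73 seconds, and the roughly 5-second ostrich benchmark); it simply reproduces the speeds mentioned in the piece and has no connection to OSU's or Agility Robotics' software.

```python
# Sanity-checking the record arithmetic quoted above; all figures come from the article.
distance_m = 100.0      # race distance
cassie_time_s = 24.73   # Guinness-verified time
ostrich_time_s = 5.0    # "5 seconds flat," per the article

cassie_mps = distance_m / cassie_time_s
print(f"Cassie: {cassie_mps:.2f} m/s ({cassie_mps * 3.6:.1f} km/h)")   # ~4.04 m/s, "just over 4 m/s"
print(f"Ostrich: {distance_m / ostrich_time_s:.0f} m/s")               # ~20 m/s, roughly five times faster
```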

  • Evolution of In-Vehicle Networks
    by Rohde & Schwarz on 28 September 2022 at 16:15

    Developments in Advanced Driver-Assistance Systems (ADAS) are creating a new approach to In-Vehicle Network (IVN) architecture design. With today’s vehicles containing at least a hundred ECUs, the current distributed network architecture has reached the limit of its capabilities. The automotive industry is now focusing on a domain or zonal controller architecture to simplify network design, reduce weight and cost, and maximize performance. Download this free poster now! A domain controller can replace the functions of many ECUs to enable high-speed communications, sensor fusion, and decision-making, as well as supporting high-speed interfaces for cameras, radar, and LiDAR sensors. This poster graphically traces the development of IVNs from the past to the present and future, then provides guidance on how to test them.

  • iRobot Crams Mop and Vacuum Into Newest Roomba
    by Evan Ackerman on 27 September 2022 at 04:08

    Robots tend to do best when you optimize them for a single, specific task. This is especially true for home robots, which need to be low cost(ish) as well as robust enough to be effective in whatever home they find themselves in. iRobot has had this formula pretty well nailed down with its family of vacuuming robots for nearly two decades, but they’ve also had another family of floor care robots that have been somewhat neglected recently: mopping robots. Today, iRobot is announcing the US $1,100 Roomba Combo j7+, which stuffs both a dry vacuum and a wet mop into the body of a Roomba j7. While very much not the first or only combo floor-cleaning robot on the market, the Combo j7+ uses a unique and very satisfying mechanical system to make sure that your carpets stay clean and dry while giving your hard floors the moist buffing that they so desperately need. While iRobot is now best known for its vacuums, a decade ago Scooba floor cleaning robots were right up there with Roombas as a focus for the company, featuring tanks for cleaning solution and dirty water and combining vacuuming and scrubbing for non-carpeted floors. They were impressive robots, but they were quite expensive and relatively labor-intensive for users, since you needed to fill and empty them with every cycle. iRobot eventually phased out the Scooba for the Braava, which used mopping pads instead of a vacuuming system and was way cheaper. But Braavas aren’t vacuums, meaning that they work best when the floor that they’re supposed to clean is vacuumed first. You can coordinate a Braava with a Roomba to do exactly that, but it’s perhaps not the most elegant way of doing things, even if it does allow you to keep your robots well-optimized for single tasks. And the Braava is showing its age, without any of the clever sensing that the newest Roombas use to intelligently navigate around your home and life. The Combo j7+ essentially combines the mopping capabilities of a Braava with the vacuuming smarts of a Roomba j7. Frankly, I have no idea why iRobot didn’t name this thing something a bit more distinctive, because the regular j7 Roomba still exists, and the “+” simply refers to the fact that it comes with a self-emptying Clean Base. The “Combo j7+”, meanwhile, has completely different hardware taking up the back half of the robot, and isn’t (as the name sort of implies) a regular j7 with an add-on or something like that. Anyway, for brevity, I’m just going to call this new combo Roomba the j7C until iRobot comes up with something better. Now, the j7C is absolutely not the first mop/vacuum robot out there, but the way most others tackle the transition from hard floors (mop appropriate) and carpet (not mop appropriate) is to raise the mopping attachment up into the body of the robot a couple of millimeters when on carpet to keep the mop from dragging across the carpet and making it wet and gross. This is certainly better than not lifting the mop up at all, but iRobot says that it still “paints” wet drippy drops all over the place. Not ideal. “Why would someone sell a carpet-painting robot that applies mud to your carpet in a systematic fashion? It’s a fundamentally flawed concept that doesn’t work. 
iRobot, being who we are, said “here’s an impossible challenge, let’s go do it because it should be fun.” —Colin Angle, iRobot CEO iRobot’s solution is, honestly, more complex than I would have thought to be practical for a Roomba—the mop pad is attached to the robot through two actuated metal arms, which can move the entire thing from the bottom of the robot to the top, placing the body of the robot in between any droppy drips and the carpet: Cool, right? There are belt drives in there to move the arms, making sure that the motion is both smooth enough and powerful enough to exert adequate pressure on the floor when the pad is under the robot. And if you look closely, you’ll see the skins on the sides of the robot open out slightly to give the arms space to move up and down. The j7C vacuums and mops in one single pass. On hard floors, water (or cleaning solution) is continuously sprayed underneath the robot from ports just behind the vacuuming system, and the mopping pad wipes it right up. When the robot detects carpet (which it does through ultrasonic sensors, not visually like the animation suggests), it pauses to lift the pad up, and then vacuums the carpet just like a regular j7 Roomba. You can remove and clean the pad of course, which is made easier to do since you don’t have to get under the robot to do it. Just aft of the brushes, three black nozzles dispense water in advance of the mopping pad. iRobot Since all of this happens in the same footprint as the original j7, there are some compromises to make room for the mopping system. This mostly happens with the bin, which now accommodates a water reservoir, taking up some of the space where you’d otherwise find dirt. It’s not a huge deal, though, because Roombas with the automatic Clean Base (including the j7C) will empty their own bins when they get full and then resume vacuuming where they left off. So, that may happen an extra time or two during the j7C’s cleaning cycle, but it’s not something the user has to worry about. What the user does have to worry about, unfortunately, is the water reservoir. You access it by removing the j7C’s bin, and then you can fill the reservoir at a sink. One fill is enough for a thousand square feet of coverage on eco-mode, and there’s also normal mode and a double-pass mode that you can select in the app for dirtier floors. This water-filling process is easy, but it’s a user-dependent step, which sadly breaks the magic of the automatic dirt-emptying Roombas where you basically don’t have to think about them for weeks at a time. Water isn’t required, at least, and if the robot detects an empty reservoir, it’ll just default to vacuuming everything instead of mopping. Other companies have approached this problem with docks that include water reservoirs able to automatically refill a robot multiple times, and I have some faith that iRobot is already working on a way of doing this more elegantly. Personally, I’m hoping for a water shuttle robot: some little bot with a small tank that zips back and forth between the dock and, say, a water dispenser hooked into the toilet fill line in your bathroom to provide refills on demand. Q&A With iRobot CEO Colin Angle We asked iRobot CEO Colin Angle to explain why the company feels that this is the right approach to a hybrid mopping and vacuuming robot, and how the heck it came up with this system in the first place. One of the greatest things about the Roomba is that it does one thing very well. 
Often with robots, trying to make more of a generalist robot results in significant compromises. What made you decide to shift from dedicated vacuuming and mopping robots, to one robot that does both tasks? Angle: The short answer is that we finally figured out how to do it. We’re not the first company with a two-in-one robot on the market. What took us so long? Well, it took us this long to solve the problem. There’s a very interesting history of mopping robots at iRobot, where we had the Scooba that put down water, scrubbed, and vacuumed the water up again. And then we started getting into mopping pads with the Scooba 230 and the Braava. Although we had a lot of skepticism about the idea of capturing the water and dirt in the pad and we didn’t know how to do it at the beginning, it proved to be a very successful strategy. Then the question was, could we figure out a mechanism that would allow edge-to-edge cleaning with a mop on a Roomba platform? Our early attempts were not successful, and that purist view of keeping the vacuuming and mopping robots separate held for a long time, but we recognized that the convenience of true on-the-fly switching would be a real and tangible customer benefit and would allow us to save on costs because we wouldn’t be duplicating much of the hardware across two separate robots.” So how did you end up at the solution of moving the mopping pad all the way from the bottom of the robot to the top? Angle: It’s certainly not where we went at first! In the annals of iRobot history are dozens of flawed ideas around how we could do this, actually dating back to some of the original designs for the original Roomba, because it was always something we’d been thinking about. The idea of having something on the bottom and pulling a plastic screen over the mopping pad got real consideration, but there are some bad failure modes there. The guy who came up with this is one of our principal engineers, who has been with iRobot maybe the longest besides me at this point. I convinced him to join the company from [the Jet Propulsion Laboratory]. He’s the one who said, “Well, why don’t we just use a belt drive and arms?” and everyone looked at him like, “Are you insane?” And so he built it, and proved to us that it could work, which is his normal way of convincing us that he’s right and we’re wrong. And it’s brilliant! It sounds like a crazy approach to solving the problem, but when you see it, it makes sense. iRobot is also announcing some hefty software updates in the form of iRobot OS 5.0. The j7 Roombas have front-facing cameras that are able to do all sorts of things, and last we checked, that included identifying and avoiding four different classes of floor-dwelling objects. iRobot OS 5.0 brings that number up to 80 [!], and obstacles now trigger different behaviors besides avoidance. Litter boxes and pet bowls, for example, can be given special attention because they tend to be dirtier areas. Same with toilets, dishwashers, and ovens. With a voice assistant, you can now also yell at your robot to skip the room that you’re in, and it’ll come back to it later, which is great for those of us who feel like our robots actively seek us out whenever they have cleaning to do. iRobot iRobot told us that its j7 vacuums have received the TÜV SÜD Cyber Security Mark, a stringent third-party endorsement of iRobot’s security practices, meaning that iRobot has invested heavily in making sure that the data that it has in its possession is kept safe from external hackers. 
This is good, for sure, but frankly I don’t get the sense that folks are nearly as worried about their data getting stolen from iRobot by hackers as they are about their data getting intentionally leveraged by iRobot (or iRobot’s future owner) in a way that is contrary to users’ interests, although iRobot has promised that it will never sell your data to other companies. Until we have a better idea of what exactly is happening with the Amazon merger, it’s probably best to remain cautious. We did ask Colin Angle what the options are if you want to keep your data completely private while still using your Roomba, and here’s what he told us: “It depends on what your comfort level is. You don’t have to turn on mapping, and you certainly don’t have to ever store or share any image from your home. If you want to clean by room, we need to remember where your rooms are. We’re going to remember them as polygonal objects—we’re not going to have any idea what they look like. We are really trying to make sure that we only store that data that the robot actually needs to do the job. If you don’t want to clean by room or build a map, you can still have the robot operate and switch between modes and benefit from the avoidance technologies and do the right things. But there are different levels that will hopefully satisfy most different levels of concern.” We also learned that iRobot’s top-of-the-line s9, which features a decidedly non-Roomba like square front plus a 3D sensor, is not the direction that iRobot will be moving in. Historically, iRobot has released premium Roombas like the s9, and then the tech in them trickles down into less expensive robots over time. But it sounds like the s9, while still iRobot’s most powerful vacuum, couldn’t justify its fancy and expensive sensor or nonround form factor to the extent that would be necessary to influence future generations of Roombas. “This is all a journey,” Angle told us. “With the costs inherent in building the s9 robot, we felt like we could go another way and put more CPU power in and really adopt an architecture around computer vision that would be more flexible than the dense 3D point-cloud sensor in the s9. The technology is moving so fast on the visual understanding and machine learning side, that it’s a better long-term bet to get behind. 3D sensing will come back, but it may come back as depth from vision. And the square front has some advantages—speed of clean is a benefit, but improved mission completion [of round Roombas] is a bigger benefit. I think the architecture of the J series robot is the go-forward architecture.” iRobot Angle acknowledges that the architecture of the J series, and of the j7C in particular, makes it a high-end robot, and you could buy a Roomba and a Braava together for much less money. But this is how iRobot does things—offering premium robots with new features and capabilities for high prices, and eventually we’ll see the costs come down in the form of more affordable generations of robot. “This is definitely an exciting path forward for us,” says Angle. The iRobot Roomba Combo j7+ is available for preorder now for $1,099 in the United States, and will be available in Canada and Europe in early October.
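
    To make the single-pass mop/vacuum behavior described above concrete, here is a minimal Python sketch of that decision logic, under stated assumptions. It is not iRobot's code: the class, the method names, and the reservoir size are invented for illustration, and the real robot's control stack is far more involved.

```python
# A minimal, hypothetical sketch of the single-pass behavior the article describes
# for the Combo j7+: spray and mop on hard floors, raise the pad and vacuum-only on
# carpet, and fall back to vacuuming once the reservoir is empty. The class, method
# names, and reservoir size are invented, not iRobot's actual API.

from dataclasses import dataclass

@dataclass
class ComboState:
    pad_raised: bool = False   # belt-driven arms can park the pad on top of the robot
    water_ml: int = 300        # assumed reservoir capacity, purely illustrative

def step(state: ComboState, surface: str) -> str:
    """Decide what to do for one patch of floor ('carpet' or 'hard_floor')."""
    actions = []
    on_carpet = surface == "carpet"   # detected by ultrasonic sensors, per the article

    # Pause and move the pad whenever the surface type changes.
    if on_carpet and not state.pad_raised:
        state.pad_raised = True
        actions.append("pause + raise pad")
    elif not on_carpet and state.pad_raised:
        state.pad_raised = False
        actions.append("pause + lower pad")

    actions.append("vacuum")              # vacuuming happens on every surface
    if not on_carpet and state.water_ml > 0:
        state.water_ml -= 1               # water sprayed from nozzles just aft of the brushes
        actions.append("spray + mop")
    # With an empty reservoir the robot simply defaults to vacuuming, as the article notes.
    return ", ".join(actions)

if __name__ == "__main__":
    s = ComboState()
    for patch in ["hard_floor", "hard_floor", "carpet", "carpet", "hard_floor"]:
        print(f"{patch:10s} -> {step(s, patch)}")
```

    A real controller would presumably also filter the ultrasonic readings and coordinate the pad motion with navigation; the sketch ignores all of that.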

  • Converting Coal Power Plants to Nuclear Gains Steam
    by Rahul Rao on 26 September 2022 at 17:00

    On a planet aspiring to become carbon neutral, the once-stalwart coal power plant is an emerging anachronism. It is true that, in much of the developing world, coal-fired capacity continues to grow. But in every corner of the globe, political and financial pressures are mounting to bury coal in the past. In the United States, coal’s share of electricity generation has plummeted since its early 2000s peak; 28 percent of U.S. coal plants are planned to shutter by 2035. As coal plants close, they leave behind empty building shells and scores of lost jobs. Some analysts have proposed a solution that, on the surface, seems almost too elegant: turning old coal plants into nuclear power plants. On 13 September, the U.S. Department of Energy (DOE) released a report suggesting that, in theory, over 300 former and present coal power plants could be converted to nuclear. Such a conversion has never been done, but the report is another sign that the idea is gaining momentum—if with the slow steps of a baby needing decades to learn to walk. “A lot of communities that may have not traditionally been looking at advanced nuclear, or nuclear energy in general, are now being incentivized to look at it,” says Victor Ibarra Jr., an analyst at the Nuclear Innovation Alliance think tank, who wasn’t involved with the DOE study. Conversion backers say the process has benefits for everybody involved. Plant operators might save on costs, with transmission lines, cooling towers, office buildings, and roads already in place. Once-coal-dependent communities might gain jobs and far better air quality. “I think it’s something that people have been talking about for a while,” says Patrick White, project manager at the Nuclear Innovation Alliance. DOE analysts screened 349 retired and 273 still-operating coal-plant sites across the United States. They filtered out sites that were retired earlier than 2012, sites that weren’t operated by utilities, and sites deemed unsuitable for nuclear reactors (such as plants in disaster-prone or high-population-density areas). That left 157 recently retired and 237 operating sites that could—in theory—house nuclear reactors. Not all of these remaining coal plants are perfect fits, however. Most nuclear plants around the world today are large light-water reactors, with capacities well over a gigawatt—quite a bit more than typical coal plants. Large reactors need consistent and prolific water sources to cool themselves, something not every old coal plant can provide. DOE analysts flagged only 35 recently retired and 96 operating coal sites that could house a large light-water reactor within half a mile. But in the future, not all reactors might be so large. Many still-speculative small modular reactor designs might deliver just a few hundred megawatts. (In Hainan, China, Linglong One—the world’s first small modular reactor plant—is now under construction.) Depending on the design, these could be cooled with less water or even air, making them far more feasible fits for coal sites. DOE analysts found 125 recently retired and 190 operating sites that could house such small reactors. Either option will be an uphill battle. In the United States, any new reactor must gain the blessing of the federal Nuclear Regulatory Commission (NRC), a process that can take up to five years and drive up costs in a sector already facing rising prices. Only one nuclear power plant is currently under construction in the United States, in eastern Georgia. 
A specific challenge would-be-conversions must face is that the NRC’s standards—both for atmospheric pollution and for the amount of radiological material a reactor can release—are much tighter than federal standards for coal plants. On the state level, no fewer than 12—California, Connecticut, Hawaii, Illinois, Maine, Massachusetts, Minnesota, New Jersey, New York, Oregon, Rhode Island, and Vermont—all have their own conditions restricting new nuclear construction. Even if regulations didn’t stand in the way, coal-to-nuclear conversion has never been done. However, there is one project that has made some headway. In Kemmerer, Wyo., nestled in the foothills of the Rocky Mountains, the nuclear energy firm TerraPower plans to retrofit an existing coal plant with a sodium fast reactor. The firm is planning to start building its reactor around 2026, hoping to deliver power by decade’s end. Even so, it hasn’t attained regulatory approval just yet. If Wyoming will be the first, there are signs that it won’t be the last. In neighboring Montana, state legislators recently approved a study for converting one coal plant to nuclear. That plant, situated in the coal mining town of Colstrip, currently faces its imminent end as nearby Oregon and Washington plan to ban coal power by 2025. In West Virginia, once coal’s citadel, the state government eliminated its old ban on nuclear power plants. Nationally, the recently enacted Inflation Reduction Act offers tax credits for nuclear projects in communities with retiring coal plants—something that will certainly increase interest in conversions. “Are all of these sites going to get nuclear power plants? Probably not,” says White. “But is this a really good way for people to start the conversation on what are potential next steps, and where are potential sites to look at it? I think that’s a really cool opportunity.”
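
    The DOE screening described above is, at its core, a series of filters applied to a list of coal-plant sites. A rough Python sketch of that filtering logic might look like the following; the field names and example records are invented, and the actual study applies more detailed criteria than this.

```python
# A rough sketch of the kind of site screening the DOE report describes: start
# from retired and operating coal-plant sites, drop those retired before 2012,
# those not operated by utilities, and those in unsuitable locations, then sort
# survivors by what they could plausibly host. Field names and the example
# records are hypothetical; the real study's criteria are more detailed.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CoalSite:
    name: str
    operating: bool
    retired_year: Optional[int]   # None if still operating
    utility_operated: bool
    suitable_location: bool       # e.g., not disaster-prone or densely populated
    water_for_large_lwr: bool     # enough cooling water nearby for a large light-water reactor

def screen(sites: list[CoalSite]) -> dict[str, list[CoalSite]]:
    candidates = [
        s for s in sites
        if s.utility_operated
        and s.suitable_location
        and (s.operating or (s.retired_year is not None and s.retired_year >= 2012))
    ]
    return {
        "large_lwr": [s for s in candidates if s.water_for_large_lwr],
        "small_modular": candidates,   # small modular reactors need less (or no) cooling water
    }

if __name__ == "__main__":
    demo = [
        CoalSite("Site A", operating=False, retired_year=2015, utility_operated=True,
                 suitable_location=True, water_for_large_lwr=True),
        CoalSite("Site B", operating=True, retired_year=None, utility_operated=True,
                 suitable_location=True, water_for_large_lwr=False),
        CoalSite("Site C", operating=False, retired_year=2009, utility_operated=True,
                 suitable_location=True, water_for_large_lwr=True),
    ]
    shortlist = screen(demo)
    print(len(shortlist["large_lwr"]), "large-reactor candidates;",
          len(shortlist["small_modular"]), "small-modular candidates")
```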

  • Government’s Role in Further Developing 5G Technologies and Future Networks
    by Anritsu on 26 September 2022 at 13:00

    As with 5G, many key technology developments have been supported by government funding and institutions for years. Federally Funded Research and Development Centers (FFRDCs) and University Affiliated Research Centers (UARCs) have been foundational elements of this government support. This webinar will examine the impact this research has had on national security and the overall economy. It will also discuss how these institutions have used testing to reach these security and economic goals. Finally, it will look at how these resources can continue to be leveraged in the further development of 5G technologies and beyond for future networks. Register now for this webinar! Speaker: MURAT TORLAK, Program Director, Computer and Network Systems (CNS). Murat Torlak received the M.S. and Ph.D. degrees in electrical engineering from The University of Texas at Austin, in 1995 and 1999, respectively. Since August 1999, he has been with the Department of Electrical and Computer Engineering at The University of Texas at Dallas, where he has been promoted to the rank of Full Professor. He has been on leave from UT Dallas since mid-2020, serving as a Rotating Program Director at the U.S. National Science Foundation (NSF). His current research interests include experimental verification of wireless networking systems, cognitive radios, millimeter-wave automotive radars, millimeter-wave imaging systems, and interference mitigation in radio telescopes. He served as an Associate Editor of the IEEE Transactions on Wireless Communications from 2008 to 2013, and was a guest co-editor of the IEEE JSTSP Special Issue on Recent Advances in Automotive Radar Signal Processing in 2021. He has also served on several IEEE committees and helped to organize IEEE conferences.

  • Video Friday: Humans Helping Robots
    by Evan Ackerman on 23 September 2022 at 18:05

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. IROS 2022: 23–27 October 2022, KYOTO, JAPAN ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND Enjoy today’s videos! Until robots achieve 100 percent autonomy (HA), humans are going to need to step in from time to time, and Contoro is developing a system for intuitive, remote human intervention. [ Contoro ] Thanks, Youngmok! A one year update of our ongoing project with Ontario Power Generation (OPG) and RMUS Canada to investigate the capabilities of Boston Dynamics’ Spot robot for autonomous inspection and first response in the power sector. Highlights of the first year of the project, featuring the work of Ph.D. student Christopher Baird, include autonomous elevator riding and autonomous door opening (including proxy card access doors) as part of Autowalks, as well as autonomous firefighting. [ MARS Lab ] Teams involved in DARPA’s Robotic Autonomy in Complex Environments with Resiliency (RACER) program have one experiment under their belts and will focus on even more difficult off-road landscapes at Camp Roberts, California, September 15–27. The program aims to give driverless combat vehicles off-road autonomy while traveling at speeds that keep pace with those driven by people in realistic situations. [ DARPA ] Tool use has long been a hallmark of human intelligence, as well as a practical problem to solve for a vast array of robotic applications. But machines are still wonky at exerting just the right amount of force to control tools that aren’t rigidly attached to their hands. To manipulate said tools more robustly, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), in collaboration with the Toyota Research Institute (TRI), have designed a system that can grasp tools and apply the appropriate amount of force for a given task, like squeegeeing up liquid or writing out a word with a pen. [ MIT ] Cornell researchers installed electronic “brains” on solar-powered robots that are 100 to 250 micrometers in size, so the tiny bots can walk autonomously without being externally controlled. [ Cornell ] Researchers at the University of California, San Diego, have developed soft devices containing algae that glow in the dark when experiencing mechanical stress, such as being squished, stretched, twisted or bent. The devices do not require any electronics to light up, making them an ideal choice for building soft robots that explore the deep sea and other dark environments, researchers said. [ UCSD ] Thanks, Liezel! Our robotaxi is built to withstand a range of temperatures to ensure that the vehicle, and most importantly, its riders are never too hot or too cold...no matter the weather. Learn more about our thermal testing in the latest episode of Putting Zoox to the Test. [ Zoox ] Thanks, Whitney! Skydio drones will do an excellent job of keeping you in frame, whatever happens. [ Skydio ] With the accelerated urbanization in the world, the development and utilization of the underground space are important for economic and social development and the survival of people’s lives is important for all of us. 
Zhejiang University Huzhou Research Institute convened a robot team to conduct an underground space unknown environment exploration adventure in Yellow dragon cave. DEEP Robotics participate in this fascinated robot party and try out the underground challenges, also team up with the drone team (air-ground robot) to seek new collaboration. [ Deep Robotics ] The title of this video is “Ion Propulsion Drone Proves Its Commercial Viability,” but it seems like quite a leap from a 4.5-minute flight to reaching the 15-minute flight with a significant payload that would be required for last-mile delivery. [ Undefined Technologies ] Welcome to this week’s edition of “How much stuff can you cram onto a Husky?” [ Clearpath ] In the Nanocopter AI challenge the teams demonstrated the AI they developed for Bitcraze AB’s Crazyflie nanocopters to perform vision-based obstacle avoidance at increasing speeds. The drones flew around in our “Cyberzoo,” avoiding a range of obstacles, from walls to poles and artificial plants. The drones were primarily scored on the distance they covered in the limited time but could gain extra points when flying also through gates. [ IMAV ] Watch this drone deliver six eggs to an empty field! Sorry, I shouldn’t be so snarky, but I’m still not sold on the whole urban drone delivery of groceries thing. [ Wing ] Flexiv is pleased to announce the launch of its ROS 2 driver to bring a better robot development experience for customers. [ Flexiv ] Northrop Grumman has been pioneering new capabilities in the undersea domain for more than 50 years. Manta Ray, a new unmanned underwater vehicle, taking its name from the massive “winged” fish, will need to be able to operate on long-duration, long-range missions in ocean environments without need for on-site human logistics support—a unique but important mission needed to address the complex nature of undersea warfare. [ Northrop Grumman ] Some unique footage from drones that aren’t scared of getting a little wet. [ Blastr ] People tend to overtrust sophisticated computing devices, especially those powered by AI. As these systems become more fully interactive with humans during the performance of day-to-day activities, ethical considerations in deploying these systems must be more carefully investigated. In this talk, we will discuss various forms of human overtrust with respect to these intelligent machines and possible ways to mitigate the impact of bias in our interactions with them. [ Columbia ] The Jet Propulsion Laboratory’s success in landing the low-cost Mars Pathfinder mission in 1997 was viewed as proof that spacecraft could be built more often and for far less money—a radical cultural change NASA termed “Faster, Better, Cheaper.” The next challenge taken on by JPL was to fly two missions to Mars for the price of the single Pathfinder mission. Mars Climate Orbiter and the Mars Polar Lander both made it to the launchpad, on time and on budget, but were lost upon arrival at Mars, resulting in one of the most difficult periods in the history of JPL. “The Breaking Point” tells the story of the demise of these two missions and the abrupt end of NASA’s “Faster, Better, Cheaper” era. [ JPL ]

  • NASA’s DART Mission Aims to Save the World
    by Ned Potter on 23 September 2022 at 15:52

    Armageddon ruined everything. Armageddon—the 1998 movie, not the mythical battlefield—told the story of an asteroid headed straight for Earth, and a bunch of swaggering roughnecks sent in space shuttles to blow it up with a nuclear weapon. “Armageddon is big and noisy and stupid and shameless, and it’s going to be huge at the box office,” wrote Jay Carr of the Boston Globe. Carr was right—the film was the year’s second biggest hit (after Titanic)—and ever since, scientists have had to explain, patiently, that cluttering space with radioactive debris may not be the best way to protect ourselves. NASA is now trying a slightly less dramatic approach with a robotic mission called DART—short for Double Asteroid Redirection Test. On Monday at 7:14 p.m. EDT, if all goes well, the little spacecraft will crash into an asteroid called Dimorphos, about 11 million kilometers from Earth. Dimorphos is about 160 meters across, and orbits a 780-meter asteroid, 65803 Didymos. NASA TV plans to cover it live. DART’s end will be violent, but not blockbuster-movie-violent. Music won’t swell and girlfriends back on Earth won’t swoon. Mission managers hope the spacecraft, with a mass of about 600 kilograms, hitting at 22,000 km/h, will nudge the asteroid slightly in its orbit, just enough to prove that it’s technologically possible in case a future asteroid has Earth in its crosshairs. “Maybe once a century or so, there’ll be an asteroid sizeable enough that we’d like to certainly know, ahead of time, if it was going to impact,” says Lindley Johnson, who has the title of planetary defense officer at NASA. “If you just take a hair off the orbital velocity, you’ve changed the orbit of the asteroid so that what would have been impact three or four years down the road is now a complete miss.” So take that, Hollywood! If DART succeeds, it will show there are better fuels to protect Earth than testosterone. The risk of a comet or asteroid that wipes out civilization is really very small, but large enough that policymakers take it seriously. NASA, ordered by the U.S. Congress in 2005 to scan the inner solar system for hazards, has found nearly 900 so-called NEOs—near-Earth objects—at least a kilometer across, more than 95 percent of all in that size range that probably exist. It has plotted their orbits far into the future, and none of them stand more than a fraction of a percent chance of hitting Earth in this millennium. The DART spacecraft should crash into the asteroid Dimorphos and slow it in its orbit around the larger asteroid Didymos. The LICIACube cubesat will fly in formation to take images of the impact.Johns Hopkins APL/NASA But there are smaller NEOs, perhaps 140 meters or more in diameter, too small to end civilization but large enough to cause mass destruction if they hit a populated area. There may be 25,000 that come within 50 million km of Earth’s orbit, and NASA estimates telescopes have only found about 40 percent of them. That’s why scientists want to expand the search for them and have good ways to deal with them if necessary. DART is the first test. NASA takes pains to say this is a low-risk mission. Didymos and Dimorphos never cross Earth’s orbit, and computer simulations show that no matter where or how hard DART hits, it cannot possibly divert either one enough to put Earth in danger. Scientists want to see if DART can alter Dimorphos’s speed by perhaps a few centimeters per second. 
The DART spacecraft, a 1-meter cube with two long solar panels, is elegantly simple, equipped with a telescope called DRACO, hydrazine maneuvering thrusters, a xenon-fueled ion engine and a navigation system called SMART Nav. It was launched by a SpaceX rocket in November. About 4 hours and 90,000 km before the hoped-for impact, SMART Nav will take over control of the spacecraft, using optical images from the telescope. Didymos, the larger object, should be a point of light by then; Dimorphos, the intended target, will probably not appear as more than one pixel until about 50 minutes before impact. DART will send one image per second back to Earth, but the spacecraft is autonomous; signals from the ground, 38 light-seconds away, would be useless for steering as the ship races in. The DART spacecraft separated from its SpaceX Falcon 9 launch vehicle, 55 minutes after liftoff from Vandenberg Space Force Base, in California, 24 November 2021. In this image from the rocket, the spacecraft had not yet unfurled its solar panels.NASA What’s more, nobody knows the shape or consistency of little Dimorphos. Is it a solid boulder or a loose cluster of rubble? Is it smooth or craggy, round or elongated? “We’re trying to hit the center,” says Evan Smith, the deputy mission systems engineer at the Johns Hopkins Applied Physics Laboratory, which is running DART. “We don’t want to overcorrect for some mountain or crater on one side that’s throwing an odd shadow or something.” So on final approach, DART will cover 800 km without any steering. Thruster firings could blur the last images of Dimorphos’s surface, which scientists want to study. Impact should be imaged from about 50 km away by an Italian-made minisatellite, called LICIACube, which DART released two weeks ago. “In the minutes following impact, I know everybody is going be high fiving on the engineering side,” said Tom Statler, DART’s program scientist at NASA, “but I’m going be imagining all the cool stuff that is actually going on on the asteroid, with a crater being dug and ejecta being blasted off.” There is, of course, a possibility that DART will miss, in which case there should be enough fuel on board to allow engineers to go after a backup target. But an advantage of the Didymos-Dimorphos pair is that it should help in calculating how much effect the impact had. Telescopes on Earth (plus the Hubble and Webb space telescopes) may struggle to measure infinitesimal changes in the orbit of Dimorphos around the sun; it should be easier to see how much its orbit around Didymos is affected. The simplest measurement may be of the changing brightness of the double asteroid, as Dimorphos moves in front of or behind its partner, perhaps more quickly or slowly than it did before impact. “We are moving an asteroid,” said Statler. “We are changing the motion of a natural celestial body in space. Humanity’s never done that before.”
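
    A few of the mission numbers quoted above can be cross-checked with quick arithmetic. The Python sketch below uses only figures stated in the article (spacecraft mass, impact speed, the SMART Nav handover distance, and the distance from Earth); it is an illustration, not anything drawn from DART flight software.

```python
# Cross-checking figures quoted above. All inputs come from the article itself;
# nothing here is drawn from the DART mission's own software or data products.

SPEED_KMH = 22_000              # stated impact speed
MASS_KG = 600                   # approximate spacecraft mass
HANDOVER_KM = 90_000            # distance at which SMART Nav takes control
EARTH_DISTANCE_KM = 11_000_000  # approximate distance from Earth at impact
C_KM_PER_S = 299_792            # speed of light

speed_m_s = SPEED_KMH * 1000 / 3600
print(f"Impact speed: {speed_m_s:,.0f} m/s")                            # ~6,100 m/s

print(f"SMART Nav handover to impact: {HANDOVER_KM / SPEED_KMH:.1f} h") # ~4.1 h, i.e. 'about 4 hours'

print(f"Momentum delivered: {MASS_KG * speed_m_s:.2e} kg*m/s")          # ~3.7e6 kg*m/s

print(f"One-way light delay: {EARTH_DISTANCE_KM / C_KM_PER_S:.0f} s")   # ~37 s, near the 38 light-seconds quoted
```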

  • Full-Wave EM Simulations: Electrically Large Antenna Placement and RCS Scenarios
    by WIPL-D on 23 September 2022 at 15:52

    Handling various complex simulation scenarios with a single simulation method is a challenging task for any software suite. We will show you how our software, based on the Method of Moments, can analyze several scenarios, including complicated and electrically large models (for instance, antenna placement and RCS), using desktop workstations. Download this free whitepaper now!

  • Circle of Circuits
    by Willie Jones on 21 September 2022 at 15:00

    The Big Picture features technology through the lens of photographers. Every month, IEEE Spectrum selects the most stunning technology images recently captured by photographers around the world. We choose images that reflect an important advance, or a trend, or that are just mesmerizing to look at. We feature all images on our site, and one also appears on our monthly print edition. Enjoy the latest images, and if you have suggestions, leave a comment below. RoboCup Class Picture Have you ever been awed by the pageantry of the parade of nations in the opening ceremony of the Olympic Games? Then this photo, featuring more than 100 Nao programmable educational robots, two Pepper humanoid assistive robots, and their human handlers, should leave you similarly amazed. It was taken at the end of this year’s RoboCup 2022 in Bangkok. After two years during which the RoboCup was scuttled by the global pandemic, the organizers were able to bring together 13 robot teams from around the world (with three teams joining in remotely) to participate in the automaton games. The spirit of the gathering was captured in this image, which, according to RoboCup organizers, shows robots with a combined market value of roughly US $1 million. Patrick Göttsch and Thomas Reinhardt Longest-Distance Calls When you’re traveling to faraway destinations, it’s comforting to know that you can remain in contact with the folks back home, no matter how far you roam. Still, it’s surprisingly easy to end up somewhere that has poor cellular reception or none to speak of. That’s because only about 10 percent of the world’s surface is in cellular coverage zones. But in April 2022, a company called Lynk launched Lynk Tower 1, poised to be the world’s first commercial satellite cell tower, into space. The cell tower, pictured here, is said to be the first of four that Lynk plans to launch into orbit this year. Once they’re in place and contracts with terrestrial cellular service providers are set up, the 4 billion people who hardly ever have adequate cellular reception will finally be able to respond in the plural when asked “How many bars you got?” Lynk Global Self-Made Manufacturing What’s more meta than using a 3D printer to make parts for a 3D printer? This device looks like a bunch of separate tubes packaged together. But it is actually a single unit that was built that way inside a 3D printer. It is a precision-engineered heat exchanger—optimized to improve the cooling of shielding gas that keeps impurities from fouling the additive manufacturing process that occurs inside an industrial 3D printer. No paper jams here. Hyperganic None the Worse for Wearable How are we to benefit from the physical and cognitive enhancements that electronic wearables could someday provide if everyday aspects of human life such as breaking a sweat are hazardous to these devices? Not to worry. In a recent paper, a research team at the University of California, Los Angeles, reported that it has the problem licked. They developed a human-machine interface that is impervious to moisture. And, as if being waterproof weren’t enough, the four-button device has been engineered to generate enough electric current to power its own operation when any of the buttons is pressed. So, it can go just about anywhere we go, with no concerns about spills, splashes, sweat, or spent batteries. JUN CHEN RESEARCH GROUP/UCLA

  • Climate Change is NSF Engineering Alliance’s Top Research Priority
    by Kathy Pretz on 20 September 2022 at 20:00

    Since its launch in April 2021, the Engineering Research Visioning Alliance has convened a diverse set of experts to explore three areas in which fundamental research could have the most impact: climate change; the nexus of biology and engineering; and securing critical infrastructure against hackers. To identify priorities for each theme, ERVA—an initiative funded by the U.S. National Science Foundation—holds what are termed visioning events, wherein IEEE members and hundreds of other experts from academia, industry, and nonprofits can conceptualize bold ideas. The results are distilled into reports that identify actionable priorities for engineering research pursuit. Reports from recent visioning events are slated to be released to the public in the next few months. IEEE is one of more than 20 professional engineering societies that have joined ERVA as affiliate partners. Research energy storage and greenhouse gas capture solutions Identifying technologies to address the climate crisis was ERVA’s first theme. The theme was based on results of a survey ERVA conducted last year of the engineering community about what the research priorities should be. “The resounding answer from the 500 respondents was climate change,” says Dorota Grejner-Brzezinska, EVRA’s principal investigator. She is a vice president for knowledge enterprise at Ohio State University, in Columbus. During the virtual visioning event in December, experts explored solar and renewable energy, carbon sequestration, water management, and geoengineering. The climate change task force released its report last month. These are some of the research areas ERVA said should be pursued: Energy storage, transmission, and critical materials. The materials include those that are nanoengineered, ones that could be used for nontraditional energy storage, and those that can extract additional energy from heat cycles. Greenhouse gas capture and elimination. Research priorities included capturing and eliminating methane and nitrous oxide released in agriculture operations. Resilient, energy-efficient, and healthful infrastructure. One identified priority was research to develop low-cost coatings for buildings and roads to reduce heat effects and increase self-cooling. Water, ecosystem, and geoengineering assessments. The report identifies research in creating sensing, measuring, and AI models to analyze the flow of water to ensure its availability during droughts and other disruptive events caused or worsened by climate change. “The groundwork ERVA has laid out in this report creates a blueprint for funders to invest in,” Grejner-Brzezinska says, “and catalyzes engineering research for a more secure and sustainable world. As agencies and research organizations enact legislation to reduce carbon emissions and bolster clean-energy technologies, engineering is poised to lead with research and development.” IEEE is developing a strategy to guide the organization’s response to the global threat. Use biology and engineering to interrupt the transfer of viruses A virtual visioning event on Leveraging Biology to Power Engineering Impact was held in March. The hope, as explained on the event’s website, is to transform research where biology and engineering intersect: health care and medicine, agriculture, and high tech. 
“As agencies and research organizations enact legislation to reduce carbon emissions and bolster clean-energy technologies, engineering is poised to lead with research and development.” The experts considered research directions in three areas: Use biology to inspire engineers to develop new components, adapt and adopt biological constructs beyond their original function, and create engineering systems and components that improve on biology. An example would be to interrupt the transfer of viruses from one species to another so as to reduce the spread of diseases. The task force’s report on which research areas to pursue is scheduled to be released next month, according to Grejner-Brzezinska. Protect infrastructure from hackers One of today’s main engineering challenges, according to ERVA, is the protection of infrastructure against hackers and other threats. At the in-person visioning event held last month at MIT on the Engineering R&D Solutions for Unhackable Infrastructure theme, researchers discussed gaps in security technologies and looked at how to design trustworthy systems and how to build resilience into interdependent infrastructures. ERVA describes unhackable as the ability to ensure safety, security, and trust in essential systems and services that society relies on. The task force examined research themes related to physical infrastructure such as assets and hardware; software and algorithms; and data and communication networks. It also considered new security methods for users, operators, and security administrators to thwart cyberattacks. Grejner-Brzezinska says the task force’s report will be released in mid-December. Sustainable transportation networks Planning has begun for the next visioning event, Sustainable Transportation Networks, to be held virtually on 2 and 3 November. The session is to explore innovative and sustainable transportation modes and the infrastructure networks needed to support them. Some of the areas to be discussed are green construction; longitudinal impact studies; interconnected transportation modes such as rail, marine, and air transport; and transportation equity. Become an ERVA supporter ERVA will convene four visioning events each year on broad engineering research themes that have the potential to solve societal challenges, Grejner-Brzezinska says. IEEE members who are experts in the fields can get involved by joining the ERVA Champions, now more than 900 strong. They are among the first to learn about upcoming visioning sessions and about openings to serve on volunteer groups such as thematic task forces, advisory boards, and standing councils. Members can sign up on the ERVA website. “Becoming a champion is an opportunity to break out of your silos of disciplines and really come together with others in the engineering research community,” Grejner-Brzezinska says. “You can do what engineers do best: solve problems.”

  • California’s Proposed Law Could Change the Internet
    by Rahul Rao on 20 September 2022 at 16:00

    Today, for better or worse, the Internet is a rather free range for children. Websites ask their users’ ages, sure. But virtually anyone who came of age around the rise of the Internet can probably relate a time or 20 when they gave a false birthdate. A California law now in the works might bring that world to a crashing halt. AB 2273, or the California Age-Appropriate Design Code Act, promises to make the Internet safer for children—in part by tightening age verification. Its opponents instead believe that, in the process, AB 2273 could completely decimate the existing Internet as we know it. AB 2273 isn’t final just yet. To become California law, a bill has to pass both houses of the state legislature—the Assembly and the Senate—and then attain the signature of the governor. AB 2273 passed the Assembly on 29 August, and the Senate the next day, posting it to Governor Gavin Newsom’s desk. As of this writing, Newsom has yet to sign the bill. There’s little indication whether he will. Suppose he does sign. Then, beginning on 1 July 2024, any website or app that “conducts business in California” and “provides an online service, product, or feature likely to be accessed by children” would need to follow yet-to-be-crafted code. California wouldn’t be the first jurisdiction to tighten age-related design standards for websites. AB 2273 explicitly cites an existing law in the United Kingdom, which expects websites to comply with a bespoke age-appropriate design code. (In fact, both bills share a backer, one Baroness Beeban Kidron, a campaigner for children’s rights online.) That U.K. law has already made ripples. YouTube disabled its autoplay feature for users under 18. Instagram started preventing adults from messaging under-18s who don’t follow them. TikTok stopped sending under-18s push notifications after a certain point each evening. But according to Eric Goldman, a law professor at Santa Clara University and one of the bill’s harshest critics, in a U.S. regulatory environment that’s generally even less friendly to businesses, California’s code is likely to be stricter. “Any ‘lessons learned’ in the U.K. do not extend to the U.S. because the law literally cannot be implemented in the same way,” he says. What does California’s AB 2273 require tech companies to do? Though California’s code doesn’t yet exist, AB 2273 lays out a few requirements. For one, websites must report their data-management practices to a California government agency. Also, websites can’t collect or sell data on children (including geolocation) that isn’t absolutely necessary for children to use the website. And websites must tell a child when a parent or guardian is tracking their activity on that site. Where AB 2273 becomes more than a little controversial is the requirement that, to determine which users ought to experience what, websites must “estimate the age of child users with a reasonable level of certainty.” “Assuming businesses do not want to intentionally degrade their value proposition to adults, then they have no alternative other than to authenticate the age of all of their customers and then segregate adults from children, with different offerings for each,” says Goldman. How a website will “estimate the age of child users” isn’t clear, and according to Techdirt, it might vary by website. A child entering a “high-risk” website, then, might need to submit an ID document for age verification. That failing, a child might literally have to scan their face. 
Not only is face recognition a technology of questionable reliability, but mandating it could also make websites inaccessible to people without a functioning camera. And although the law champions privacy, it’s not clear that authentication along those lines could even be done in a privacy-conscious manner. Goldman says that websites might rely on insecure third-party services. If AB 2273 becomes law, its effects could spread well beyond the state’s borders. Websites will be left with two options: geolocating users in California (perhaps blocking them completely, potentially risking revenue), or applying the rules to all their users. Many websites will just find it easier to do the latter. Then around the world, users might have to face the same age-authentication gauntlet that Californians would. And, according to Goldman, other jurisdictions might take after California in drafting their own laws. Some of AB 2273’s sponsors and defenders see the bill as a necessary measure in a world where children are vulnerable to dangers like manipulative websites, invasive apps, and social-media addiction. But from many corners, the reaction has been less than positive. AB 2273 has garnered a wide range of opponents, including privacy advocates and big tech. Santa Clara’s Goldman likens the law to a neutron bomb. “It will depopulate the Internet and turn many services into ghost towns,” he says. Of course, this is all still hypothetical. For now, the bill awaits Governor Newsom’s signature. Even if he signs, AB 2273 is hardly immune to lawsuits. NetChoice—an advocacy group that has helped take other laws passed in Florida and Texas to court—has already come out against the bill.

  • Upcycling a 40-year-old Tandy Model 100 Portable Computer
    by Stephen Cass on 20. Septembra 2022. at 15:00

    Last year I picked up a Tandy Model 100 at the Vintage Computer Festival East for about US $90. Originally released in 1983, it was the forerunner of today’s notebook computers, featuring a good-quality keyboard and LCD display. It could run for 20 hours on four AA batteries and a month on standby. Thanks to the work of the Club 100 user group, I was able to tap into a universe of software written for the Model 100 (also known as the M100). Unfortunately, my machine stopped working. I was able to identify the faulty component, and rather than attempt to find a new replacement, I bought a cheap, broken M100 that was being sold for parts on eBay. I extracted the component I needed from its motherboard and repaired my original M100. Then I looked at the now-even-more-broken second M100, still with its lovely keyboard and screen, and thought, “Surely there’s something I can do with this.” How hard could it be to swap out a 40-year-old 8-bit 8085 CPU and motherboard for something more modern? I’m not the first person to have thought of this, of course. A number of folks have upcycled the M100, but they typically replace the 240-by-64-pixel monochrome display with something with color and much higher resolution, or they keep the original LCD but use it as a text-only display. I wanted to keep the original display, because I like its big, chunky pixels and low power needs, but I also wanted the ability to support graphics and different fonts, as with the original M100. If I could do that, I could use any number of replacement CPUs, thanks to software like CircuitPython’s displayio libraries. But I soon discovered the challenge was in the M100’s deeply weird—by today’s standards—LCD. The M100’s LCD is really 10 separate displays, each controlled by its own HD44102 driver chip. The driver chips are each responsible for a 50-by-32-pixel region of the screen, except for two chips at the right-hand side that control only 40 by 32 pixels. This provides a total screen resolution of 240 by 64 pixels. Within each region the pixels are divided into four rows, or banks, each eight pixels high. Each vertical column of eight pixels corresponds to one byte in a driver’s local memory. Vintage Tandy M100 computers [left] can be bought for parts for less than US $100. An interface shield along with a resistor and capacitor [right, top] can plug into an Arduino Mega microcontroller and allow you to repurpose the screen and keyboard. Illustration: James Provost. To set an arbitrary pixel, you determine the screen region it’s in, enable the corresponding driver chip, tell the chip you are sending a command, send the command to select a bank and column, tell the chip you’re now sending pixel data, and then write a data byte that sets eight pixels at once, including the one you want and seven others that come along for the ride. The reason for this arrangement is that it speeds things up considerably when displaying text. If you have a seven-pixel-high font, plus one pixel of blank space at the bottom, you can copy the font’s bitmap straight from memory byte by byte. Sequential bytes can often be sent without additional commands because the chip automatically advances the column index after receiving a data byte. The order of the banks as displayed can also be altered for fast scrolling. This bank/column addressing scheme is still used, for example, in some modern OLED displays, but their banks span the entire display—that is, one chip per screen. I would have to manage each region and driver myself.
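To make the addressing scheme concrete, here is a rough Python sketch of how a single pixel on the 240-by-64 screen maps to a driver chip, bank, column, and bit. It is only an illustration of the scheme described above, not the code actually running on the Arduino Mega, and the chip numbering and the bit order within each byte are assumptions:
    # Toy illustration of the M100 LCD addressing (not the author's Arduino code).
    REGION_WIDTHS = [50, 50, 50, 50, 40]    # five HD44102 regions across each 32-pixel-high half

    def pixel_to_driver(x, y):
        """Map an (x, y) pixel (0 <= x < 240, 0 <= y < 64) to (chip, bank, column, bit)."""
        half = y // 32                      # top row of five chips or bottom row of five
        column = x
        region = 0
        for width in REGION_WIDTHS:         # walk across the regions to find the right chip
            if column < width:
                break
            column -= width
            region += 1
        chip = half * 5 + region            # chips 0-4 on top, 5-9 on the bottom (assumed numbering)
        bank = (y % 32) // 8                # four banks per region, each eight pixels high
        bit = y % 8                         # which pixel within the data byte (assumed LSB = top)
        return chip, bank, column, bit

    def set_pixel(framebuffer, x, y, on=True):
        """Set or clear one pixel in a framebuffer laid out as framebuffer[chip][bank][column]."""
        chip, bank, column, bit = pixel_to_driver(x, y)
        byte = framebuffer[chip][bank][column]
        framebuffer[chip][bank][column] = byte | (1 << bit) if on else byte & ~(1 << bit)
        return chip, bank, column           # the caller still selects the chip and sends the updated byte

    # A blank framebuffer: 10 chips x 4 banks x 50 columns (the two narrow chips ignore columns 40-49).
    framebuffer = [[[0] * 50 for _ in range(4)] for _ in range(10)]
    print(pixel_to_driver(239, 63))         # -> (9, 3, 39, 7): last chip, last bank, last column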
Some things made it easier. First, the M100 was designed to be serviced. The screen drivers sit on a board that interfaces with the motherboard via a 15-by-2-pin connector that can be simply pulled free. The keyboard uses a straightforward 10-by-10 matrix, and also connects via easily detachable connectors. There is a fantastic service manual that gives the details of every single circuit. With the service manual, the HD44102’s datasheet, and some helpful online tips from other folks who’d played with the LCD, I was able to build an interface between the display and an Arduino Mega 2560. And the fact that older machines are often more tolerant of abuse also helped—none of this “give me even a half a volt over 3.3 volts and I’ll let all the magic smoke out” business. Cross a wire by accident? No problem, just fix it and try again. Feed in a raw pulse-width-modulated (PWM) signal instead of a constant analog one? Fine, I’ll just sit here and flicker a bit. The interface provides the -5 V the LCD needs in addition to +5 V. The interface also hosts an RC low-pass filter to smooth the PWM signal that simulates the 0-to-4 V output of a potentiometer used to adjust the viewing angle. The other pins are passed through to the Mega’s digital input/output or power lines. Ten driver chips each control a region of the screen, and must be selected as required by one of 10 chip select lines. Then a bank and column within that region is selected to receive a byte of bitmapped data, setting eight pixels at once. Illustration: James Provost. I wrote some code to store a 240-by-64-pixel framebuffer and to handle the mapping of its pixels to their corresponding screen regions. The software selects the appropriate chip, bank, and column, sends the data, and manages the various clock and other control signals. The Mega appears to the outside world as the driver of a modern monochrome display, accepting bitmap data as rows (or columns) of pixels that span the screen—exactly the kind of thing that the displayio library can handle. The LCD can now be hooked up to the microcontroller of my choice via a parallel or serial connection to the Mega, which copies incoming data to the framebuffer; I intend to use a Teensy 4.1, which will allow me to talk to the matrix keyboard directly, have enough compute power for some basic text-editing firmware, and provide a VT100 terminal serial interface—which could connect to a Raspberry Pi 4 compute module also mounted inside the M100. That would provide Wi-Fi, a 64-bit OS, and up to 8 gigabytes of RAM—a big step up from the 8 to 24 kilobytes that the case originally housed! This article appears in the October 2022 print issue as “Upcycling a Tandy Model 100.”

  • We Can Now Train Big Neural Networks on Small Devices
    by Matthew Hutson on 20. Septembra 2022. at 13:02

    The gadgets around us are constantly learning about our lives. Smartwatches pick up on our vital signs to track our health. Home speakers listen to our conversations to recognize our voices. Smartphones play grammarian, watching what we write in order to fix our idiosyncratic typos. We appreciate these conveniences, but the information we share with our gadgets isn’t always kept between us and our electronic minders. Machine learning can require heavy hardware, so “edge” devices like phones often send raw data to central servers, which then return trained algorithms. Some people would like that training to happen locally. A new AI training method expands the training capabilities of smaller devices, potentially helping to preserve privacy. The most powerful machine-learning systems use neural networks, complex functions filled with tunable parameters. During training, a network receives an input (such as a set of pixels), generates an output (such as the label “cat”), compares its output with the correct answer, and adjusts its parameters to do better next time. To know how to tune each of those internal knobs, the network needs to remember the effect of each one, but they regularly number in the millions or even billions. That requires a lot of memory. Training a neural network can require hundreds of times the memory called upon when merely using one (also called “inference”). In the latter case, the memory is allowed to forget what each layer of the network did (its intermediate results, called activations) as soon as it passes information to the next layer. To reduce the memory demanded during the training phase, researchers have employed a few tricks. In one, called paging or offloading, the machine moves those activations from short-term memory to a slower but more abundant type of memory such as flash or an SD card, then brings them back when needed. In another, called rematerialization, the machine deletes the activations, then computes them again later. Previously, memory-reduction systems used one of those two tricks or, says Shishir Patil, a computer scientist at the University of California, Berkeley, and the lead author of the paper describing the innovation, they were combined using “heuristics” that are “suboptimal,” often requiring a lot of energy. The innovation reported by Patil and his collaborators formalizes the combination of paging and rematerialization. “Taking these two techniques, combining them well into this optimization problem, and then solving it—that’s really nice,” says Jiasi Chen, a computer scientist at the University of California, Riverside, who works on edge computing but was not involved in the work. In July, Patil presented his system, called POET (private optimal energy training), at the International Conference on Machine Learning, in Baltimore. He first gives POET a device’s technical details and information about the architecture of a neural network he wants it to train. He specifies a memory budget and a time budget. He then asks it to create a training process that minimizes energy usage. The process might decide to page certain activations that would be inefficient to recompute but rematerialize others that are simple to redo but require a lot of memory to store. One of the keys to the breakthrough was to define the problem as a mixed-integer linear programming (MILP) puzzle, a set of constraints and relationships between variables. For each device and network architecture, POET plugs its variables into Patil’s hand-crafted MILP program, then finds the optimal solution. 
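To give a flavor of what such a formulation looks like, here is a toy mixed-integer program in Python, written with the off-the-shelf PuLP package (an assumption for illustration, not necessarily what POET uses): each activation is either kept in RAM, paged out, or rematerialized, and the solver minimizes energy subject to a memory budget. The numbers are invented, and POET's real formulation also models per-operation timing and the device's energy characteristics.
    # Toy sketch of combining paging and rematerialization as a MILP (not POET itself).
    # Requires the PuLP package; memory and energy figures are made up for illustration.
    from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

    acts = ["a1", "a2", "a3", "a4"]
    mem = {"a1": 8, "a2": 6, "a3": 4, "a4": 2}        # MB held in RAM if the activation is kept
    e_page = {"a1": 5, "a2": 4, "a3": 3, "a4": 2}     # energy cost to page out and back in
    e_remat = {"a1": 9, "a2": 2, "a3": 6, "a4": 1}    # energy cost to recompute the activation
    MEM_BUDGET = 10                                   # MB of RAM available for activations

    prob = LpProblem("toy_poet", LpMinimize)
    keep = {a: LpVariable(f"keep_{a}", cat=LpBinary) for a in acts}
    page = {a: LpVariable(f"page_{a}", cat=LpBinary) for a in acts}
    remat = {a: LpVariable(f"remat_{a}", cat=LpBinary) for a in acts}

    # Objective: total energy spent on paging and recomputation.
    prob += lpSum(e_page[a] * page[a] + e_remat[a] * remat[a] for a in acts)

    # Every activation is handled in exactly one way.
    for a in acts:
        prob += keep[a] + page[a] + remat[a] == 1

    # Only activations kept in RAM count against the memory budget.
    prob += lpSum(mem[a] * keep[a] for a in acts) <= MEM_BUDGET

    prob.solve()
    for a in acts:
        choice = "keep" if value(keep[a]) else ("page" if value(page[a]) else "rematerialize")
        print(a, "->", choice)
    print("energy:", value(prob.objective))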
“A main challenge is actually formulating that problem in a nice way so that you can input it into a solver,” Chen says. “So, you capture all of the realistic system dynamics, like energy, latency, and memory.” The team tested POET on four different processors, whose RAM ranged from 32 KB to 8 GB. On each, the researchers trained three different neural network architectures: two types popular in image recognition (VGG16 and ResNet-18), plus a popular language-processing network (BERT). In many of the tests, the system could reduce memory usage by about 80 percent, without a big bump in energy use. Comparable methods couldn’t do both at the same time. According to Patil, the study showed that BERT can now be trained on the smallest devices, which was previously impossible. “When we started off, POET was mostly a cute idea,” Patil says. Now, several companies have reached out about using it, and at least one large company has tried it in its smart speaker. One thing they like, Patil says, is that POET doesn’t reduce network precision by “quantizing,” or abbreviating, activations to save memory. So the teams that design networks don’t have to coordinate with teams that implement them in order to negotiate trade-offs between precision and memory. Patil notes other reasons to use POET besides privacy concerns. Some devices need to train networks locally because they have low or no Internet connection. These include devices used on farms, in submarines, or in space. Other setups can benefit from the innovation because data transmission requires too much energy. POET could also make large devices—Internet servers—more memory efficient and energy efficient. But as for keeping data private, Patil says, “I guess this is very timely, right?”

  • Disentangling the Facts From the Hype of Quantum Computing
    by James S. Clarke on 19. Septembra 2022. at 14:00

    This is a guest post in recognition of IEEE Quantum Week 2022. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE. Few fields invite as much unbridled hype as quantum computing. Most people’s understanding of quantum physics extends to the fact that it is unpredictable, powerful, and almost existentially strange. A few years ago, I provided IEEE Spectrum an update on the state of quantum computing and looked at both the positive and negative claims across the industry. And just as in 2019, I remain enthusiastically optimistic today. Even though the hype is real and has outpaced the actual results, much has been accomplished over the past few years. First, let’s address the hype. Over the past five years, there has been undeniable hype around quantum computing—hype around approaches, timelines, applications, and more. As far back as 2017, vendors were claiming the commercialization of the technology was just a couple of years away. There was even what I’d call antihype, with some questioning whether quantum computers would materialize at all. I hope they end up being wrong. More recently, companies have shifted their timelines from a few years to a decade, but they continue to release road maps showing commercially viable systems as early as 2029. And these hype-fueled expectations are becoming institutionalized: The Department of Homeland Security even released a road map to protect against the threats of quantum computing, in an effort to help institutions transition to new security systems. This creates an “adopt or you’ll fall behind” mentality for both quantum-computing applications and postquantum cryptography security. Market research firm Gartner (of “Hype Cycle” fame) believes quantum computing may have already reached peak hype, or phase two of its five-phase growth model. This means the industry is about to enter a phase called the “trough of disillusionment.” According to McKinsey & Company, “fault tolerant quantum computing is expected between 2025 and 2030 based on announced hardware roadmaps for gate-based quantum computing players.” I believe this is not entirely realistic, as we still have a long journey to achieve quantum practicality—the point at which quantum computers can do something unique to change our lives. In my opinion, quantum practicality is likely still 10 to 15 years away. However, progress toward that goal is not just steady; it’s accelerating. That’s the same thing we saw with Moore’s Law and semiconductor evolution: The more we discover, the faster we go. Semiconductor technology has taken decades to progress to its current state, accelerating at each turn. We expect similar advancement with quantum computing. In fact, we are discovering that what we have learned while engineering transistors at Intel is also helping to speed our quantum-computing development work today. For example, when developing silicon spin qubits, we’re able to leverage existing transistor-manufacturing infrastructure to ensure quality and to speed up fabrication. We’ve started the mass production of qubits on a 300-millimeter silicon wafer in a high-volume fab facility, which allows us to fit an array of more than 10,000 quantum dots on a single wafer. 
We’re also leveraging our experience with semiconductors to create a cryogenic quantum control chip, called Horse Ridge, which is helping to solve the interconnect challenges associated with quantum computing by eliminating much of the cabling that today crowds the dilution refrigerator. And our experience with testing semiconductors has led to the development of the cryoprober, which enables our team to get testing results from quantum devices in hours instead of the days or weeks it used to take. Others are likely benefiting from their own prior research and experience, as well. For example, Quantinuum’s recent research showed the entanglement of logical qubits in a fault-tolerant circuit using real-time quantum error correction. While still primitive, it’s an example of the type of progress needed in this critical field. For its part, Google has a new open-source library called Cirq for programming quantum computers. Along with similar libraries from IBM, Intel, and others, Cirq is helping drive development of improved quantum algorithms. And, as a final example, IBM’s 127-qubit processor, called Quantum Eagle, shows steady progress toward upping the qubit count. The author shows Intel quantum-computing prototypes. Photo: Intel. There are also some key challenges that remain. First, we still need better devices and high-quality qubits. While the very best one- and two-qubit gates meet the needed threshold for fault tolerance, the community has yet to accomplish that on a much larger system. Second, we’ve yet to see anyone propose an interconnect technology for quantum computers that is as elegant as how we wire up microprocessors today. Right now, each qubit requires multiple control wires. This approach is untenable as we strive to create a large-scale quantum computer. Third, we need fast qubit control and feedback loops. Horse Ridge is a precursor for this, because we would expect latency to improve by having the control chip in the fridge and therefore closer to the qubit chip. And finally, error correction. While there have been some recent indications of progress toward error correction and mitigation, no one has yet run an error-correction algorithm on a large group of qubits. With new research regularly showing novel approaches and advances, these are challenges we will overcome. For example, many in the industry are looking at how to integrate qubits and the controller on the same die to create quantum systems-on-chip (SoCs). But we’re still quite a way off from having a fault-tolerant quantum computer. Over the next 10 years, Intel expects to be competitive with (or pull ahead of) others in terms of qubit count and performance, but as I stated before, a system large enough to deliver compelling value won’t be realized for 10 to 15 years, by anyone. The industry needs to continue its evolution of qubit counts and quality improvement. After that, the next milestone should be the production of thousands of quality qubits (still several years away), and then scaling that to millions. Let’s remember that it took Google 53 qubits to create an application that could accomplish a supercomputer function. If we want to explore new applications that go beyond today’s supercomputers, we’ll need to see system sizes that are orders of magnitude larger. Quantum computing has come a long way in the past five years, but we still have a long way to go, and investors will need to fund it for the long term. 
Significant developments are happening in the lab, and they show immense promise for what could be possible in the future. For now, it’s important that we don’t get caught up in the hype but focus on real outcomes. Correction 21 Sept. 2022: A previous version of this post stated incorrectly that the release of an announced 5,000-qubit quantum computer in 2020 did not happen. It did. Spectrum regrets the error.

  • Defining the Future Using Next Generation IP Intelligence Solutions
    by Clarivate on 19. Septembra 2022. at 12:19

    With the continued growth in intellectual property and related innovation data, how confident are you that your intelligence tools are delivering the insights you need to make the right IP decisions? Can these tools give you an accurate picture of your technology domain, competitive activity, and emerging threats? Register now for this free webinar! By combining enhanced IP data with powerful search technology, next-generation tools make it easy for the world’s IP and business executives to find actionable insights and make higher-confidence R&D, IP, and business decisions. Register for this webinar, where we will cover the following topics: today’s challenges for data-driven, innovative organizations, such as how to bring innovation to market faster; how you can sharpen your competitive edge by including IP data in your analyses and workflows; what critical insights you can gain by correlating patent, trademark, litigation, non-patent literature, firmographic, and proprietary third-party data; and the Clarivate way to close the confidence gap in your IP research and analysis. Our speaker: Rohit Gole, Principal Consultant, Clarivate

  • Coding Made AI—Now, How Will AI Unmake Coding?
    by Craig S. Smith on 19. Septembra 2022. at 12:00

    Are coders doomed? That question has been bouncing around computer-programming communities ever since OpenAI’s large language model, GPT-3, surprised everyone with its ability to create HTML websites from simple written instructions. In the months since, rapid-fire advances have led to systems that can write complete, albeit simple, computer programs from natural-language descriptions—spoken or written human language—and automated coding assistants that speed the work of computer programmers. How far will artificial intelligence go in replacing or augmenting the work of human coders? According to the experts IEEE Spectrum consulted, the bad news is that coding as we know it may indeed be doomed. But the good news is that computer programming and software development appear poised to remain a very human endeavor for the foreseeable future. In the meantime, AI-powered automated code generation will increasingly speed software development by allowing more code to be written in a shorter time. “I don’t believe AI is anywhere near replacing human developers,” said Vasi Philomin, Amazon’s vice president for AI services, adding that AI tools will free coders from routine tasks, but the creative work of computer programming will remain. If someone wants to become a developer, say, 10 years down the line, they won’t necessarily need to learn a programming language. Instead, they will need to understand the semantics, concepts, and logical sequences of building a computer program. That will open software development to a much broader population. When the programming of electronic computers began in the 1940s, programmers wrote in numerical machine code. It wasn’t until the mid-1950s that Grace Hopper and her team at the computer company Remington Rand developed FLOW-MATIC, which allowed programmers to use a limited English vocabulary to write programs. Since then, programming has climbed a ladder of increasingly efficient languages that allow programmers to be more productive. AI-written code is the cutting edge of a broader movement to allow people to write software without having to code at all. Already, with platforms like Akkio, people can build machine-learning models with simple drag, drop, and button-click features. Users of Microsoft’s Power Platform, which includes a family of low-code products, can generate simple applications by just describing them. In June, Amazon released CodeWhisperer, a coding assistant for programmers, like GitHub’s Copilot, which was first released in limited preview in June 2021. Both tools are based on large language models (LLMs) that have been trained on massive code repositories. Both offer autocomplete suggestions as a programmer writes code or suggest executable instructions from simple natural-language phrases. A GitHub survey of 2,000 developers found that Copilot cut in half the time it takes for certain coding tasks and raised overall developer satisfaction with their work. But to move beyond autocompletion, the problem becomes conveying the programmer’s intent to the computer. Software requirements are usually vague, while natural language is notoriously imprecise. 
“To resolve all these ambiguities in English written specification, there needs to be some incremental refinement, some conversation between the human and the machine,” said Peter Schrammel, cofounder of Diffblue, which automates the writing of unit tests for Java. To address these problems, researchers at Microsoft have recently proposed adding a feedback mechanism to LLM-based code generation so that the computer asks the programmer for clarification of any ambiguities before generating code. The interactive system, called TiCoder, refines and formalizes user intent by generating what is called a “test-driven user-intent formalization”—which attempts to use iterative feedback to divine the programmer’s algorithmic intent and then generate code that is consistent with the expressed intentions. According to their paper, TiCoder improves the accuracy of automatically generated code from 48 percent to as much as 85 percent when evaluated on the Mostly Basic Programming Problems (MBPP) benchmark. MBPP, meant to evaluate machine-generated code, consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry-level programmers. A unit of code, which can be hundreds of lines long, is the smallest part of a program that can be maintained and executed independently. A suite of unit tests, typically consisting of dozens of unit tests, each of them between 10 and 20 lines of code, checks that the unit executes as intended, so that when you stack the units together, the program works as intended. Unit tests are useful for debugging individual functions and for detecting errors when code is manually changed. But a unit test can also be used as the specification for the unit of code and can be used to guide programmers to write clean, bug-free code. While not many programmers pursue true test-driven development, in which the unit tests are written first, unit tests and units are generally written together. According to a survey by Diffblue, developers spend roughly 35 percent of their time writing quality-control tests (as opposed to writing code destined for production use), so there are significant productivity gains to be made just by automating a part of this. Meanwhile, GitHub’s Copilot, Amazon’s CodeWhisperer, and other AI programming-assistant packages can be used as interactive auto-completion tools for writing unit tests. The programmer is given suggestions and picks the one that they think will work best. Diffblue’s system, called Diffblue Cover, uses reinforcement learning to write unit tests automatically, with no human intervention. Earlier this year, Google’s U.K.-based artificial-intelligence lab, DeepMind, went further in fully automatic code generation with AlphaCode, a large language model that can write simple computer programs from natural-language instructions. AlphaCode uses an encoder-decoder transformer architecture, first encoding the natural-language description of the problem and then decoding the resulting vector into code for a solution. The model was first trained on the GitHub code repository until the model was able to produce reasonable-looking code. To fine-tune the model, DeepMind used 15,000 pairs of natural-language problem descriptions and successful code solutions from past coding competitions to create a specialized data set of input-output examples. Once AlphaCode was trained and tuned, it was tested against problems it hadn’t seen before. 
The final step was to generate many solutions and then use a filtering algorithm to select the best one. “We created many different program possibilities by essentially sampling the language model almost a million times,” said Oriol Vinyals, who leads DeepMind’s deep-learning team. To optimize the sample-selection process, DeepMind uses a clustering algorithm to divide the solutions into groups. The clustering process tends to group the working solutions together, making it easier to find a small set of candidates that are likely to work as well as those written by human programmers. To test the system, DeepMind submitted 10 AlphaCode-written programs to a human coding competition on the popular Codeforces platform, where its solutions ranked among the top 54 percent. “To generate a program, will you just write it in natural language, no coding required, and then the solution comes out at the other end?” Vinyals asked rhetorically in a recent interview. “I believe so.” Vinyals and others caution that it will take time, possibly decades, to reach that goal. “We are still very far away from when a person would be able to tell a computer about the requirements for an arbitrary complex computer program, and have that automatically get coded,” said Andrew Ng, founder and CEO of Landing AI, an AI pioneer and founding lead of Google Brain. But given the speed at which AI code generation has advanced in a few short years, it seems inevitable that AI systems will eventually be able to write code from natural-language instructions. Hand-coding software programs will increasingly be like hand-knitting sweaters. To give natural-language instructions to a computer, developers will still need to understand some concepts of logic and functions and how to structure things. They will still need to study foundational programming, even if they don’t learn specific programming languages or write in computer code. That will, in turn, enable a wider range of programmers to create more and more varied kinds of software. “I don’t believe AI is anywhere near replacing human developers,” Amazon’s Philomin said. “It will remove the mundane, boilerplate stuff that people have to do, and they can focus on higher-value things.” Diffblue’s Schrammel agrees that AI-automated code generation will allow software developers to focus on more difficult and creative tasks. But, he adds, there will at least need to be one interaction with a human to confirm that what the machine has understood is what the human intended. “Software developers will not lose their jobs because an automation tool replaces them,” he said. “There always will be more software that needs to be written.”
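The selection step Vinyals describes lends itself to a small illustration. The toy Python below is not DeepMind's code: it stands in sampled candidate programs with plain functions, discards any that crash, groups the rest by how they behave on example inputs, and returns a representative from the largest behavioral cluster. AlphaCode's real pipeline filters against a problem's example tests and clusters on generated inputs at a vastly larger scale.
    # Toy sketch of the "sample many, filter, cluster" selection idea (not DeepMind's code).
    from collections import defaultdict

    def select_candidate(candidates, example_inputs):
        """Group candidate programs by their behavior and return one from the largest group."""
        clusters = defaultdict(list)
        for program in candidates:
            try:
                signature = tuple(program(x) for x in example_inputs)   # behavioral fingerprint
            except Exception:
                continue                                                # drop candidates that crash
            clusters[signature].append(program)
        if not clusters:
            return None
        largest = max(clusters.values(), key=len)                       # biggest behavioral cluster
        return largest[0]

    # Three "sampled" attempts at "return the square of x"; two behave identically, one is wrong.
    candidates = [lambda x: x * x, lambda x: x ** 2, lambda x: x + x]
    best = select_candidate(candidates, example_inputs=[2, 3, 4])
    print(best(5))   # prints 25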

  • Video Friday: Loona
    by Evan Ackerman on 16. Septembra 2022. at 18:19

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. IROS 2022: 23–27 October 2022, KYOTO, JAPAN ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND Enjoy today's videos! Another robotic pet on Kickstarter, another bunting of red flags. Let's see, we've got: "she's so playful and affectionate you'll forget she's a robot." "Everything you can dream of in a best friend and more." "Get ready to fall in love!" And that's literally like the first couple of tiles on the Kickstarter post. Look, the hardware seems fine, and there is a lot of expressiveness going on, I just wish they didn't set you up for an inevitable disappointment when after a couple of weeks it becomes apparent that yes, this is just a robotic toy, and will never be your best friend (or more). Loona is currently on Kickstarter for about US $300. [ Kickstarter ] Inspired by the flexibility and resilience of dragonfly wings, we propose a novel design for a biomimetic drone propeller called Tombo propeller. Here, we report on the design and fabrication process of this biomimetic propeller that can accommodate collisions and recover quickly, while maintaining sufficient thrust force to hover and fly. [ JAIST ] Thanks Van! Meet Tom, a software engineer at Boston Dynamics, as he shares insights on programming and testing the practical—and impractical—applications of robotics. Whether Spot is conducting inspections or playing an instrument, learn how we go from code on a computer to actions in the real world. Yeah, but where do I get that awesome shirt?! [ Boston Dynamics ] This Ameca demo couples automated speech recognition with GPT-3—a large language model that generates meaningful answers—the output is fed to an online TTS service which generates the voice and visemes for lip-sync timing. The team at Engineered Arts Ltd. pose the questions. "Meaningful answers." [ Engineered Arts ] The ANT project develops a navigation and motion control system for future walking systems for planetary exploration. After successful testing on ramps and rubble fields, the challenge of climbing rough inclines such as craters is being tackled. [ DFKI ] Look, if you’re going to crate-train Spot, at least put some blankets and stuffed animals in there or something. [ Energy Robotics ] With multitrade layout, all trades’ layouts are set down with a single pass over the floor by Dusty's FieldPrinter system. Trades experience unparalleled clarity and communication with each other, because they can see each other’s installation plans and immediately identify and resolve conflicts. Instead of fighting over the floor and pointing fingers, they start to solve problems together. [ Dusty Robotics ] We present QUaRTM—a novel quadcopter design capable of tilting the propellers into the forward flight direction, which reduces the drag area and therefore allows for faster, more agile, and more efficient flight. [ HiPeRLab ] Is there an option in the iRobot app to turn my Roomba into a cake? Because I want cake. [ iRobot ] Looks like SoftBank is getting into high-density robotic logistics. [ Impress ] GITAI S2 ground test for space debris removal. During this demonstration, a tool changer was also tested to perform several different tasks at OSAM. 
[ GITAI ] Recent advances allow for the automation of food preparation in high-throughput environments, yet the successful deployment of these robots requires the planning and execution of quick, robust, and ultimately collision-free behaviors. In this work, we showcase a novel framework for modifying previously generated trajectories of robotic manipulators in highly detailed and dynamic collision environments. [ Paper ] The LCT Hospital in South Korea uses “Dr. LCT” for robotic-based orthopedic knee procedures. The system is based on the KUKA LBR Med robotic platform, which is ideally suited for orthopedic surgery with its seven axes, software developed specifically for medical technology, and appropriate safety measures. [ Kuka ] A year in review. Compilation of 2022 video highlights of the Game Changing Development (GCD) Program. The Game Changing Development Program is a part of NASA’s Space Technology Mission Directorate. The program advances space technologies that may lead to entirely new approaches for the agency’s future space missions and provide solutions to significant national needs. [ NASA ] Naomi Wu reviews a Diablo mobile robot (with some really cool customizations of her own), sending it out to run errands in Shenzhen during lockdown. [ Naomi Wu ] Roundtable discussion on how teaching automation in schools, colleges, and universities can help shape the workers of tomorrow. ABB Robotics has put together a panel of experts in this field to discuss the challenges and opportunities. [ ABB ] On 8 September 2022, Mario Santillo of Ford talked to robotics students as the first speaker in the Undergraduate Robotics Pathways & Careers Speaker Series, which aims to answer the question “What can I do with a robotics degree?” [ Michigan Robotics ]

  • Take a Trip Through Switzerland’s Museum of Consumer Electronics
    by Joanna Goodrich on 16. Septembra 2022. at 18:00

    For more than a decade Museum ENTER, in Solothurn, Switzerland, has been a place where history buffs can explore and learn about the development and growth of computer and consumer electronics in Switzerland and the rest of the world. On display are computers, calculators, floppy disks, phonographs, radios, video game consoles, and related objects. Thanks to a new four-year partnership between the museum and the IEEE Switzerland Section, IEEE members may visit the facility for free. They also can donate their time to help create exhibits; translate pamphlets, display cards, and other written media; and present science, technology, engineering, and math workshops. The technology on display includes televisions and radios from the 1950s. Photo: ENTER Museum. Collections of calculators, radios, telephones, and televisions ENTER started as the private collection of Swiss entrepreneur Felix Kunz, who had been amassing computers and other electronics since the mid-1970s. Kunz and Peter Regenass—a collector of calculators—opened the museum in 2011 near the Solothurn train station. The museum’s collection focuses on the history of technology made in Switzerland by companies including Bolex, Crypto AG, and Gretag. The technology on display includes early telegraphs, telephones, televisions, and radios. There are 300 mechanical calculators from Regenass’s collection. One of the mechanical calculators, the Curta, looks like a pepper mill and has more than 700 parts. The museum also has several Volksempfängers, the early radio models used by the Nazis to spread propaganda. Visitors can check out the collection of working Apple computers, which the museum claims is the largest in Europe. Free admission, discounts, and STEM education courses The IEEE Switzerland Section began its partnership with the museum last year, when the IEEE student branch at EPFL hosted a presentation there, says IEEE Senior Member Mathieu Coustans, the Switzerland Section’s treasurer. In May, the section and the museum organized a workshop celebrating 100 years of radio broadcasting in Switzerland. IEEE members presented on the topic in French, Coustans says, and then translated the presentations to English. Based on the success of both events, he says, the section and the museum began to discuss how else they could collaborate. The two organizations discovered they have “many of the same goals,” says IEEE Member Violetta Vitacca, chief executive of the museum. They both aim to inspire the next generation of engineers, promote the history of technology, and bring together engineers from academia and industry to collaborate. The section and museum decided to create a long-term partnership to help each other succeed. In addition to the free visits, IEEE members receive a 10 percent discount on services offered by the museum, including digitizing books and other materials and repairing broken equipment such as radios and vintage record players. Members can donate historical artifacts too. In addition, IEEE groups are welcome to host conferences and section meetings at the facility. The IEEE Switzerland Section as well as members of student branches and the local IEEE Life Members Affinity Group have agreed to speak at events held at the museum and teach STEM classes there. “The museum is a space where both professional engineers and young people can network and learn from each other,” Vitacca says. 
“I think this partnership is a win-win for both IEEE and the museum.” She says she hopes that “collaborating with IEEE will help Museum ENTER gain an international reputation.” The perks of the collaboration will become “especially attractive with the opening of the brand-new Museum ENTER building” next year, says IEEE Senior Member Hugo Wyss, chair of the Switzerland Section, who led the partnership effort. Exhibits on gaming, inventors, and startups The museum is set to move in May to a larger building in the village of Derendingen. When it reopens there in November, these are some new additions visitors can look forward to: audio guides, display cards, and pamphlets in German, English, and French; “The Academy,” which aims to inspire the next generation of engineers by offering workshops, lectures, and other events, as well as access to a technical library; a data digitization laboratory where collectors and electronics enthusiasts can convert vintage media carriers, records, and film; and a public-gathering piazza with an attached café and meeting rooms. The museum offers STEM workshops. Photo: ENTER Museum. In addition, these eight permanent exhibits will be available, the museum says: Game Area, a display featuring innovations that have driven the rise of gaming and high-performance computing; Hall of Brands, a showcase of technologies from well-known companies; Now, current technology highlighted in the news; Show of Pioneers, a look at the inventors of popular consumer and computer electronics; Switzerland Connected, a showcase for the country’s former and current accelerators, startups, and schools; Time Travel, a retrospective look at 150 years of technology; and Typology of Technology, applications such as optical and magnetic recording used for music and film. The museum also plans to curate special exhibitions. “We are going from being simply a museum with an extensive collection to being a center for networking, education, and innovation,” Vitacca says. “That’s why it’s important for the museum to collaborate with IEEE. Our offerings are not only unique in Switzerland but also across Europe. IEEE is a great partner for us to help get the word out about what we do.”

  • Faster, Meaner, Deadlier: The Evolution of “BattleBots”
    by Stephen Cass on 15. Septembra 2022. at 16:13

    Earlier this year, friend-of-IEEE Spectrum and fashiontech designer Anouk Wipprecht gave a peek at what it’s like to be a competitor on “BattleBots,” the 22-year-old robot-combat competition, from the preparation “pit” to the arena. Her team, Ghostraptor, was knocked out of the regular competition after losing its first and second fights, though they regained some glory by winning a round in the bonus Golden Bolt tournament, which recently finished airing on the TBS TV channel. This week, tickets went on sale for audience seating for the next season of “BattleBots”; filming will commence in October in Las Vegas. We thought it was a good moment to get a different perspective on the show, so Spectrum asked one of the founders of “BattleBots” and its current executive producer, Greg Munson, about how two decades’ worth of technological progress has impacted the competition. What are the biggest changes you’ve seen, technology-wise, over 20 years or so? Greg Munson: Probably the biggest is battery technology. “BattleBots” premiered on Comedy Central in, I think it was, 2000. Now we’re 22 years later. In the early days, people were using car batteries. Then NiCad packs became very popular. But with the advent of lithium technology, when the battery packs could be different sizes and shapes, that’s when things just took off in terms of power-to-weight ratio. Now you can have these massively spinning disk weapons, or bar weapons, or drum weapons that can literally obliterate the other robot. Greg Munson. Photo: Gabe Ginsberg/Getty Images. Second is the [improvement in electronic speed control (ESC) circuitry]. We built a robot called Bombmachine back in the day. And besides its giant gel cell batteries, which were probably a third of the [bot’s total] weight, we had this big old Vantex speed controller with a big giant heat sink. The ESC form factors have gotten smaller. They’ve gotten more efficient. They’re able to handle way more amperage through the system, so they don’t blow up. They’ve got more technology built into them, so the team can have a person monitoring things like heat, and they’ll know when to, for instance, shut a weapon down. You see this a lot now on the show where they’re spinning up really fast, going in for a hit. And then they actually back off the weapon. And watchers will think, “Oh, the weapon’s dead.” But no, they’re actually just letting it cool down because the monitor guy has told his driver, “Hey, the weapon’s hot. I’m getting some readings from the ESC. The weapon’s hot. Give me five seconds.” That kind of thing. And that’s a tremendous strategy boon. So instead of just one-way remote control, teams are getting telemetry back from the robots now as well? Munson: A lot of that is starting to happen more and more, and teams like Ribbot are using that. I think they’re influencing other teams to go that route as well, which is great. Just having that extra layer of data during the fight is huge. What other technologies have made a big difference? Munson: CAD is probably just as big of a technology boost since the ’90s compared to now. In the early “BattleBots” era, a lot of teams were using pencil and paper or little wooden prototypes. Only the most elite, fancy teams back then would use some early version of Solidworks or Autodesk. We were actually being hit up by the CAD companies to get more builders into designing in CAD. 
Back in the day, if you’re going to build a robot without CAD, you think very pragmatically and very form-follows-function. So you saw a lot of robots that were boxes with wheels and a weapon on top. That’s something you can easily just draw on a piece of paper and figure out. And now CAD is just a given. High-school students are designing things in CAD. But when you’ve got CAD, you can play around and reshape items, and you can get a robot like HyperShock—it looks like there’s no right-angled pieces on HyperShock. CAD gives the robots more personality and character, which is perfect for a TV show because we want the audience to go, “Hey, that’s HyperShock, my favorite!” Because of the silhouettes, because of the shape, it’s branded, it’s instantly identifiable—as opposed to a silver aluminum box that has no paint. When Anouk was writing about being a competitor, she pointed out that there’s quite a strict safety regime teams have to follow, especially with regard to batteries, which are stored and charged in a separate area where competitors have to bring their robots before a fight. How did those rules evolve? Munson: It’s part “necessity is the mother of invention” and part you just know the lithium technology is more volatile. We have a really smart team that helps us do the rules—there are some EEs on there and some mechanical engineers. They know about technology issues even before they hit the awareness of the general public. The warning shots were there from the beginning—lithium technology can burn, and it keeps on burning. We started out with your basic bucket full of sand and special fire extinguishers along the arena side and in the pit where people were fixing the robots. Every row had a bucket of sand and a protocol for disposing of the batteries properly and safely. But it quickly became obvious that if there’s a battery fire in the pit, with the smoke and whatnot, that’s a no-go. So we quickly pivoted away from that [to a separate] battery charging pit. We’ve seen batteries just go up, and those fires don’t happen in the main pit; they happen in the battery pit—which is a huge, huge win for us because that’s a place where we know exactly how to deal with that. There’s staff at the ready to put the fires out and deal with them. We also have a battery cool-down area for after a fight. When the batteries have just discharged massive amounts of energy, they’re hot and some of them are puffing. They get a full inspection. You can’t go back to the pit after your match. You have to go to the battery cool-down area—it’s outside, it’s got fans, it’s cool. A dedicated safety inspector is there inspecting the batteries, making sure they’re not on the verge of causing a fire or puffing in any kind of way. If it’s all good, they let them cool down and stay there for 10, 15 minutes, and then they can go back to the battery-charging tent, take the batteries out and recharge them, and then go back to fixing the robot. If the batteries are not good, they are disposed of properly. The technology has become more flexible, but how do you prevent competitors from just converging on a handful of optimal design solutions, with all the robots starting to look alike? Munson: That’s a constant struggle. Sometimes we win, and sometimes we lose. A lot of it is in the judging rules, the criteria. 
We’ve gone through so many iterations of the judging rules because builders love to put either a fork, a series of forks, or a wedge on their bot. Makes total sense because you can scoop the guy up and hit them with your weapon or launch them in the air. So okay, if you’re just wedging the whole fight, is that aggressive? Is that control? Is that damage? And so back in the day, we were probably more strict and ruled that if all you do is just wedge, we actually count it against you. We’ve loosened up there. Now, if all you do is wedge, it only counts against you just a little bit. But you’ll never win the aggression category if all you’re going to do is wedge. Because a wedge can beat everything. We often saw the finals would be between a big gnarly spinner and a wedge. Wedges are a very effective, simple machine that can clean up in robot combat. So we’re tweaking our judging guide and how we count the effectiveness of wedges if the fight goes to judges. Meanwhile, we don’t want it to go to judges. We want to see a knockout. So we demand that you have to have an active weapon. You can’t just have a wedge. It has to be a robust, active weapon that can actually cause damage. You just can’t put a Home Depot drill on the top of your robot and call it a day. That was just something we knew we needed to have to push the sport forward. What seems to be happening is the vertical spinners are now sort of the dominant class. We don’t want the robots to be homogenized. That’s one of the reasons why we allow modifications during the actual tournament. Certain fans have gotten mad at us, like, “Why’d you let them add this thing during the middle of the tournament?” Because we want that. We want that spirit of ingenuity and resourcefulness. We want to break any idea of “vertical spinners will always win.” We want to see different kinds of fights because people will get bored otherwise. Even if there’s massive amounts of destruction, which always seems to excite us, if it’s the same kind of destruction over and over again, it starts to be like an explosion in Charlie’s Angels that I’ve seen 100 times, right? A lot of robots are modular now, where they can swap out a vertical spinner for a horizontal undercutter and so on. This will be a constant evolution for our entire history. If you ask me this question 20 years from now, I’m going to still be saying it’s a struggle!

  • Andrew Ng: Unbiggen AI
    by Eliza Strickland on 9. Februara 2022. at 15:31

    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A. Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias. The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way? Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions. When you say you want a foundation model for computer vision, what do you mean by that? Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them. What needs to happen for someone to build a foundation model for video? Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision. Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. 
While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries. It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users. Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation. I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince. I expect they’re both convinced now. Ng: I think so, yes. Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.” How do you define data-centric AI, and why do you consider it a movement? Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data. When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline. The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up. You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them? Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. 
But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn. When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set? Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system. For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance. Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training? Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle. One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way. When you talk about engineering the data, what do you mean exactly? Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it.
But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity. For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow. What about using synthetic data—is that often a good solution? Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development. Do you mean that synthetic data would allow you to try the model on more data sets? Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, or other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category. Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data. To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment? Ng: When a customer approaches us, we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data. One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory. How do you deal with changing needs?
If products change or lighting conditions change in the factory, can the model keep up? Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations. In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists? So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work. Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains. Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement? Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it. This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
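To make the data-centric workflow Ng describes more concrete, here is a minimal sketch of two of the checks he mentions: flagging images whose annotators disagree, and surfacing the class where the model most needs better or additional data. It is illustrative only, written in Python over made-up inputs; the data structures, thresholds, and function names are assumptions for this sketch, not Landing AI's LandingLens tooling.

```python
# Minimal sketch of two data-centric checks: flag images whose annotators
# disagree, and flag the classes where the trained model performs worst.
# All inputs below are hypothetical toy data, not LandingLens structures.
from collections import Counter, defaultdict

# Each image ID maps to the labels assigned by several human annotators.
annotations = {
    "img_001": ["scratch", "scratch", "scratch"],
    "img_002": ["dent", "pit_mark", "dent"],      # annotators disagree
    "img_003": ["pit_mark", "pit_mark", "pit_mark"],
}

# Per-example evaluation results from a trained model: (true label, correct?).
eval_results = [
    ("scratch", True), ("scratch", True), ("dent", True),
    ("pit_mark", False), ("pit_mark", False), ("pit_mark", True),
]

def inconsistent_labels(annotations, min_agreement=1.0):
    """Return image IDs whose annotators do not agree strongly enough."""
    flagged = []
    for image_id, labels in annotations.items():
        top_count = Counter(labels).most_common(1)[0][1]
        if top_count / len(labels) < min_agreement:
            flagged.append(image_id)
    return flagged

def weakest_classes(eval_results, top_k=1):
    """Return the classes with the lowest accuracy, i.e. where targeted
    relabeling, augmentation, or extra data collection would help most."""
    totals, correct = defaultdict(int), defaultdict(int)
    for label, is_correct in eval_results:
        totals[label] += 1
        correct[label] += int(is_correct)
    accuracy = {c: correct[c] / totals[c] for c in totals}
    return sorted(accuracy.items(), key=lambda kv: kv[1])[:top_k]

print("Relabel these images:", inconsistent_labels(annotations))
print("Collect or engineer data for:", weakest_classes(eval_results))
```

In the smartphone-casing example Ng gives, a report like this might point at the pit-mark class, and the targeted follow-up would be relabeling, augmentation, or synthetic generation of pit-mark images rather than collecting more data across the board.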

  • How AI Will Change Chip Design
    by Rina Diane Caballar on 8 February 2022 at 14:00

    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process. Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version. But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform. How is AI currently being used to design the next generation of chips? Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There are a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider. Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI. What are the benefits of using AI for chip design? Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced-order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that result from iterating quickly on the experiments and simulations, which will really help in the design. So it’s like having a digital twin in a sense? Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end. So, it’s going to be more efficient and, as you said, cheaper? Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things.
That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering. We’ve talked about the benefits. How about the drawbacks? Gorr: The [AI-based experimental models] tend not to be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years. Both chip design and manufacturing are system-intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something for different parts of it, but you still need to bring it all together. One of the other things to think about is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge. How can engineers use AI to better prepare and extract insights from hardware or sensor data? Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start. One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI. What should engineers and designers consider when using AI for chip design? Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team. How do you think AI will affect chip designers’ jobs? Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip. How do you envision the future of AI and chip design? Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model.
We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see fewer of those superprecise predictions and more transparency, information sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
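Gorr's reduced-order-modeling point can be sketched in a few lines: sample an expensive physics-based simulation a modest number of times, fit a cheap surrogate to those samples, and run the large parameter sweep or Monte Carlo study on the surrogate. The example below is a generic Python illustration under stated assumptions (the "simulation" is a stand-in function and the quadratic surrogate is an arbitrary choice); it is not a MATLAB or MathWorks workflow.

```python
# Sketch of surrogate-assisted design exploration: sample an "expensive"
# physics-based simulation a few times, fit a cheap polynomial surrogate,
# then run a large Monte Carlo sweep on the surrogate instead.
# expensive_simulation() is a stand-in for a real solver.
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(width_nm, temperature_c):
    """Stand-in for a slow physics-based model of some chip metric."""
    return 3.0 + 0.02 * width_nm - 0.01 * temperature_c + 1e-4 * width_nm * temperature_c

# 1) A small design-of-experiments sample of the expensive model.
widths = rng.uniform(10, 100, size=30)
temps = rng.uniform(20, 90, size=30)
y = np.array([expensive_simulation(w, t) for w, t in zip(widths, temps)])

# 2) Fit a quadratic surrogate by least squares.
def features(w, t):
    return np.column_stack([np.ones_like(w), w, t, w * t, w**2, t**2])

coeffs, *_ = np.linalg.lstsq(features(widths, temps), y, rcond=None)

# 3) Monte Carlo sweep on the surrogate: a million evaluations are cheap.
w_mc = rng.uniform(10, 100, size=1_000_000)
t_mc = rng.uniform(20, 90, size=1_000_000)
predictions = features(w_mc, t_mc) @ coeffs

best = np.argmax(predictions)
print(f"Best predicted design: width={w_mc[best]:.1f} nm, temp={t_mc[best]:.1f} C")
```

As Gorr notes, a surrogate is less accurate than the full physics-based model, so promising candidate designs found this way would still be verified against the expensive simulation.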

  • Atomically Thin Materials Significantly Shrink Qubits
    by Dexter Johnson on 7 February 2022 at 16:12

    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality. IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability. Now researchers at MIT have been able both to reduce the size of the qubits and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100. “We are addressing both qubit miniaturization and quality,” said William Oliver, the director of the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient; they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.” The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit. Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C). Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. (Photo: Nathan Fiske/MIT) In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that make them too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another. As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance. In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory of Electronics. On either side of the hBN, the MIT researchers used the 2D superconducting material niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas. While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor. “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.” This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits. “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang. Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
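The footprint advantage of a parallel-plate capacitor with a few-nanometer hBN dielectric follows from the textbook formula C = ε0·εr·A/d. The short calculation below is a back-of-the-envelope illustration, not a set of figures from the MIT work: the target capacitance (about 70 femtofarads), the hBN thickness (about 10 nanometers), and the relative permittivity (about 3.5) are assumed values chosen only to show the scale of the reduction relative to the roughly 100-by-100-micrometer coplanar plates mentioned above.

```python
# Back-of-the-envelope comparison of plate area for a fixed target capacitance,
# using the parallel-plate formula C = eps0 * eps_r * A / d.
# The target capacitance, hBN thickness, and permittivity are illustrative
# assumptions, not values reported by the MIT team.
import math

EPS0 = 8.854e-12          # vacuum permittivity, F/m
C_TARGET = 70e-15         # assumed qubit shunt capacitance, 70 fF

def plate_area(c_farads, eps_r, thickness_m):
    """Area (m^2) of a parallel-plate capacitor for a given capacitance."""
    return c_farads * thickness_m / (EPS0 * eps_r)

# Few-nanometer hBN dielectric (assumed eps_r ~ 3.5, d ~ 10 nm).
area_hbn = plate_area(C_TARGET, eps_r=3.5, thickness_m=10e-9)
side_hbn_um = math.sqrt(area_hbn) * 1e6

print(f"hBN parallel-plate area: {area_hbn * 1e12:.1f} um^2 "
      f"(~{side_hbn_um:.1f} um on a side, versus ~100 um for a coplanar design)")
```

Under these assumptions the plate shrinks from roughly 100 micrometers on a side to a few micrometers, the kind of footprint reduction that makes much denser qubit arrays plausible.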
