IEEE News

IEEE Spectrum

  • “Great Capacity!” “Less Latency!”—How Wi-Fi 7 Achieves Both
    by Michael Koziol on 27. May 2022. at 19:48

    New generations of Wi-Fi have sprung onto the scene at a rapid pace in recent years. After a storied five-year presence, Wi-Fi 5 was usurped in 2019 by Wi-Fi 6, only for the latter to be toppled a year later in 2020 by an intermediate generation, Wi-Fi 6E. And now, just a couple of years later, we’re on the verge of Wi-Fi 7.

Wi-Fi 7 (the official IEEE standard is 802.11be) may only give Wi-Fi 6 a scant few years in the spotlight, but it’s not just an upgrade for the sake of an upgrade. Several new technologies—and some that debuted in Wi-Fi 6E but haven’t yet entirely come into their own—will allow Wi-Fi 7 routers and devices to make full use of an entirely new band of spectrum at 6 gigahertz. This spectrum—first tapped into with Wi-Fi 6E—adds a third wireless band alongside the more familiar 2.4-GHz and 5-GHz bands. New technologies called automated frequency coordination, multi-link operations, and 4K QAM (all described below) will further increase wireless capacity, reduce latency, and generally make Wi-Fi networks more flexible and responsive for users.

Automated frequency coordination (AFC) solves a thorny problem with the 6-GHz band: while Wi-Fi is the new kid in town, it’s moving into an otherwise well-staked-out portion of the spectrum. In the United States, for example, federal agencies like NASA and the Department of Defense often use the 6-GHz band to communicate with geostationary satellites. Weather radar systems and radio astronomers rely heavily on this band as well. And these incumbents really don’t appreciate errant Wi-Fi signals muscling in on their frequency turf.

Fortunately, the preexisting uses of 6-GHz microwaves are largely predictable, localized, and stationary. So AFC allows Wi-Fi into the band by making it possible to coordinate with and work around existing use cases. “We’re looking at where all of these fixed services are located,” says Chris Szymanski, a director of product marketing at Broadcom. “We’re looking at the antenna patterns of these fixed services, and we’re looking at the direction they’re pointing.” All of this information is added into cloud-based databases. The databases will also run interference calculations, so that when a Wi-Fi 7 access point checks the database, it will be alerted to any incumbent operators—and their particulars—in its vicinity.

AFC makes it possible for Wi-Fi 7 networks to operate around incumbents by preventing transmissions in bands that would interfere with nearby weather radar, radio telescopes, or other existing users. At the same time, it frees up Wi-Fi 7 networks to broadcast at a higher power when they know there’s no preexisting spectrum user nearby to worry about. Szymanski says that Wi-Fi 7 networks will be able to use AFC to transmit on the 6-GHz band with 63 times as much power when the coast is clear as they could if they had to maintain a uniform low-level transmission power to avoid disturbing any incumbents. More power translates to better service over longer distances, more reliability, and greater throughput.

AFC is not new to Wi-Fi 7. It debuted with Wi-Fi 6E, the incremental half-step generation between Wi-Fi 6 and Wi-Fi 7 that emerged as a consequence of the 6-GHz band becoming available in many places. With Wi-Fi 7, however, more classes of wireless devices will receive AFC certification, expanding its usefulness and impact.
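The AFC arrangement described above boils down to a geolocation-keyed lookup: an access point reports where it is, and a cloud service answers with the channels and power levels it may use there. The Python sketch below is a toy illustration of that flow only; the class, the incumbent records, the keep-out-radius rule, and the power numbers are all invented for illustration and are not the real AFC protocol or any vendor's API.

```python
# Illustrative sketch only: a toy "AFC database" that mimics the decision the
# article describes -- check a Wi-Fi access point's reported location against
# known 6-GHz incumbents and hand back per-channel power permissions. All
# names, numbers, and the simple radius test are invented; the real AFC system
# uses detailed antenna patterns and interference modeling.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Incumbent:
    name: str
    channel: int        # hypothetical 6-GHz channel index
    lat: float
    lon: float
    protect_km: float   # keep-out radius (invented)

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

INCUMBENTS = [
    Incumbent("weather radar", channel=5, lat=40.0, lon=-105.0, protect_km=30.0),
    Incumbent("fixed microwave link", channel=9, lat=40.2, lon=-104.8, protect_km=15.0),
]

def allowed_channels(ap_lat, ap_lon, channels=range(1, 15)):
    """Return {channel: max_power_dBm} for an access point at the given spot."""
    grants = {}
    for ch in channels:
        blocked = any(
            inc.channel == ch and distance_km(ap_lat, ap_lon, inc.lat, inc.lon) < inc.protect_km
            for inc in INCUMBENTS
        )
        # A generous grant when the coast is clear, no grant on a protected channel.
        if not blocked:
            grants[ch] = 36.0   # illustrative high-power limit, in dBm
    return grants

if __name__ == "__main__":
    print(allowed_channels(40.05, -104.95))   # near the radar: channel 5 is withheld
    print(allowed_channels(45.00, -100.00))   # far from everything: all channels granted
```

The 63-fold power difference Szymanski mentions corresponds to about 18 decibels, which is why getting a higher-power grant from the database matters so much.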
Multi-link operations (MLO) will take advantage of the fact that Wi-Fi’s existing 5-GHz band and new 6-GHz band are much closer to each other than the 2.4-GHz and 5-GHz bands are. Wi-Fi access points have long had the ability to support transmissions over multiple wireless channels at the same time. With Wi-Fi 7, client devices like cellphones and IoT hardware will be able to access multiple channels at the same time. (Think about how you currently have to connect to either a 2.4-GHz network or a 5-GHz network when you’re joining a Wi-Fi network.) MLO will allow a device to connect to both a 5-GHz channel and a 6-GHz channel at the same time and use both to send and receive data.

This wasn’t really possible before the addition of the 6-GHz band, explains Andy Davidson, a senior director of product technology planning at Qualcomm. The 5-GHz and 6-GHz bands are close enough that they have functionally the same speeds. Trying the same trick with the 2.4-GHz and 5-GHz bands would drag down the effectiveness of the 5-GHz transmissions as they waited for the slower 2.4-GHz transmissions to catch up. This is especially clear in alternating multi-link, a type of MLO in which, as the name implies, a device alternates between two channels, sending portions of its transmissions on each (as opposed to simultaneous multi-link, in which the two channels are simply used in tandem). Using alternating multi-link with the 2.4-GHz and 5-GHz bands is like trying to run two trains at different speeds on one track. “If one of those trains is slow, especially if they’re very slow, it means your fast train can’t even do anything because it’s waiting for the slow train to complete” its trip, says Davidson.

There’s also 4K QAM—short for quadrature amplitude modulation (more on the “4K” in a moment). At its core, QAM is a way of sending multiple bits of information in the same instant of a transmission by combining signals of different amplitudes and phases. The “4K” in 4K QAM means that each transmitted symbol can take any of more than 4,000 distinct amplitude-and-phase combinations—4,096, to be exact—so each symbol carries 12 bits (a short calculation at the end of this article makes the arithmetic concrete). 4K QAM is also not new to Wi-Fi 7, but Davidson says the new generation will make 4K QAM standard. Like multi-link operations and automated frequency coordination, 4K QAM increases capacity and, by extension, reduces latency.

When Wi-Fi 7 becomes available, there will be differences between regions. The availability of spectrum varies between countries, depending on how their respective regulatory agencies have allocated it. For example, while multi-link operations in the United States will be able to use the channels at 5 GHz and 6 GHz, the latter won’t be available for Wi-Fi use in China. Instead, Wi-Fi devices in China can use two different channels in the 5-GHz band.

Companies including Broadcom and Qualcomm have announced their Wi-Fi 7 components in recent weeks. That doesn’t mean Wi-Fi 7 routers and cellphones are right around the corner. Over the next months, those devices will be built and certified using the components from Broadcom, Qualcomm, and others. But the wait won’t be too long—Wi-Fi 7 devices will likely be available by the end of the year.
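To put numbers on the 4K QAM point above: a 4,096-point constellation carries log2(4,096) = 12 bits per symbol, versus 10 bits for the 1,024-QAM introduced with Wi-Fi 6, a 20 percent increase per symbol before coding and other overhead. A minimal check of that arithmetic:

```python
# Bits per symbol for the QAM orders mentioned above (Wi-Fi 6's 1024-QAM vs.
# Wi-Fi 7's 4096-QAM). Pure arithmetic, no hardware assumptions.
from math import log2

for name, points in [("1024-QAM (Wi-Fi 6)", 1024), ("4096-QAM, a.k.a. 4K QAM (Wi-Fi 7)", 4096)]:
    print(f"{name}: {int(log2(points))} bits per symbol")

gain = log2(4096) / log2(1024) - 1
print(f"Raw per-symbol gain: {gain:.0%}")   # -> 20%
```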

  • How the FCC Settles Radio-Spectrum Turf Wars
    by Mitchell Lazarus on 27. May 2022. at 19:00

    You’ve no doubt seen the scary headlines: Will 5G Cause Planes to Crash? They appeared late last year, after the U.S. Federal Aviation Administration warned that new 5G services from AT&T and Verizon might interfere with the radar altimeters that airplane pilots rely on to land safely. Not true, said AT&T and Verizon, with the backing of the U.S. Federal Communications Commission, which had authorized 5G. The altimeters are safe, they maintained. Air travelers didn’t know what to believe.

Another recent FCC decision had also created a controversy about public safety: okaying Wi-Fi devices in a 6-gigahertz frequency band long used by point-to-point microwave systems to carry safety-critical data. The microwave operators predicted that the Wi-Fi devices would disrupt their systems; the Wi-Fi interests insisted they would not. (As an attorney, I represented a microwave-industry group in the ensuing legal dispute.)

Whether a new radio-based service will interfere with existing services in the same slice of the spectrum seems like a straightforward physics problem. Usually, though, opposing parties’ technical analyses give different results. Disagreement among the engineers then opens the way for public safety to become just one among several competing interests. I’ve been in the thick of such arguments, so I wanted to share how these issues arise and how they are settled.

Battling for Bandwidth

Not all radio spectrum is created equal. Lower frequencies travel farther and propagate better through buildings and terrain. Higher frequencies offer the bandwidth to carry more data and work well with smaller antennas. Every radio-based application has its own needs and its own spectral sweet spot. Suitable spectrum for mobile data—4G, 5G, Wi-Fi, Bluetooth, many others—runs from a few hundred megahertz to a few gigahertz. Phones, tablets, laptops, smart speakers, Wi-Fi-enabled TVs and other appliances, Internet-of-things devices, lots of commercial and industrial gear—they all need these same frequencies.

The problem is that this region of spectrum has been fully occupied for decades. So when a new service like 5G appears, or an older one like Wi-Fi needs room to expand, the FCC has two options. For a licensed service like 5G, the FCC generally clears incumbent users from a range of frequencies—either repacking them into other frequencies nearby or relocating them to a different part of the spectrum—and then auctions the freed-up spectrum to providers of the new service. To accommodate an unlicensed service like Wi-Fi, the FCC overlays the new users onto the same frequencies as the incumbents, usually at lower power. The FCC tries to write technical rules for the new or expanded service that will leave the incumbents mostly unaffected.

It is commonplace for newcomers to complain that any interference they cause is not their fault, attributing it to inferior incumbent receivers that fail to screen out unwanted signals. This argument usually fails. The newcomer must deal with the spectrum and its occupants as it finds them. Strategies for accomplishing that task vary.

Alternative Realities

This radio tower, located near downtown Los Angeles, is bedecked with 6-GHz fixed-microwave antennas that serve area police and fire departments. (George Rose/Getty Images)

Congress prohibits the FCC (and other federal agencies) from changing the regulatory ground rules without first soliciting and considering public input.
On technical issues, that input comes mostly from the affected industries after the FCC outlines its tentative plans in a Notice of Proposed Rulemaking. There follows a back-and-forth exchange of written submissions posted to the FCC’s website, typically lasting a year or more. Ordinarily, parties can also make in-person presentations to the FCC staff and the five commissioners, if they post summaries of what they say. Sometimes the staff uses these meetings to test possible compromises among the parties. All this openness and transparency has a big exception: Other federal agencies, like the FAA, can and sometimes do submit comments to the FCC’s website, but they also have a back channel to deliver private communications.

The submissions in a spectrum proceeding generally make two kinds of points. First, the newcomers and the incumbents both present data to impress the FCC with their respective services’ widespread demand, importance to the economy, and utility in promoting education, safety, and other public benefits. Second, both the proponents and opponents of a new frequency usage submit engineering studies and simulations, sometimes running to hundreds of pages. Predictably, the two parties’ studies come to opposite conclusions. The proponents show the new operations will have no harmful effect on incumbents, while the incumbents demonstrate that they will suffer devastating interference. Each party responds with point-by-point critiques of the other side’s studies and may carry out counter-studies for further proof the other side is wrong.

How do such alternative realities arise? It’s not because they are based on different versions of Maxwell’s equations. The two sides’ studies usually disagree because they start with differing assumptions about the newcomer’s transmitter characteristics, the incumbent’s receiver characteristics, and the geometries and propagation that govern interaction between the two. Small changes to some of these factors can produce large changes in the results.

Sometimes the parties, the FCC, or another government agency may conduct hardware tests in the lab or in the field to assess the degree of interference and its effects. Rather than settle anything, though, these experiments just add fuel to the controversy. Parties disagree on whether the test set-up was realistic, whether the data were analyzed correctly, and what the results imply for real-world operations. When, for example, aviation interests ran tests that found 5G transmissions caused interference to radio altimeters, wireless carriers vigorously challenged their results. In contrast, there was no testing in the 6-GHz Wi-Fi proceeding, where the disagreements turned on theoretical analyses and simulations.

Further complicating matters, the disputed studies and tests do not predict interference as a binary yes/no but as differing probabilities for various degrees of interference. And the parties involved often disagree on whether a given level of interference is harmless or will cause the victim receiver to malfunction. Reaching a decision on interference issues requires the FCC to make its way through a multi-dimensional maze of conflicting uncertainties. Here are some concrete issues that illuminate this all-too-common dynamic.

Fixed Ideas

Those ubiquitous sideways-facing dishes on towers and buildings are fixed-microwave antennas. Equipment of this kind has operated reliably since the 1950s.
The 6-GHz band, the lowest-frequency microwave band available today, is the only one capable of 100-kilometer hops, making it indispensable. Along with more pedestrian uses, the band carries safety-critical information: to coordinate trains, control pressure in oil and gas pipelines, balance the electric grid, manage water utilities, and route emergency telephone calls.

The red lines on this map of the 48 contiguous U.S. states show the location of existing 6-gigahertz fixed-microwave links, as recorded by Comsearch, which helps companies to avoid issues with radio interference. These links connect people in almost all areas, including far offshore in the Gulf of Mexico, where drilling platforms are common. (Comsearch)

Four years ago, when the FCC proposed adding Wi-Fi to the 6-GHz band, all sides agreed that the vast majority of Wi-Fi devices would cause no trouble. Statistically, most would be outside the microwave antennas’ highly directional main beams, or on the wrong frequency, or shielded by buildings, terrain, and ground clutter. The dispute centered on the small proportion of devices that might transmit on a frequency in use while being in the line-of-sight of a microwave antenna. The Wi-Fi proponents projected just under a billion devices, operating among 100,000 microwave receivers. The opponents pointed out that even a very small fraction of the many new transmitters could cause troubling numbers of interference events.

To mitigate the problem, the FCC adopted rules for an automated frequency coordination (AFC) system. A Wi-Fi device must either report its location to a central AFC database, which assigns it non-interfering frequencies for that location, or operate close to and under the control of an AFC-guided device. The AFC system will not be fully operational for another year or two, and disagreements persist about the details of its eventual operation. More controversially, the FCC also authorized Wi-Fi devices without AFC, transmitting at will on any 6-GHz frequency from any geographic location—but only indoors and at no more than one-quarter of the maximum AFC-controlled power. The Wi-Fi proponents’ technical studies showed that attenuation from building walls would prevent interference. The microwave operators’ studies showed the opposite: that interference from uncontrolled indoor devices was virtually certain.

How could engineers, using the same equations, come to such different conclusions? These are a few of the ways in which their analyses differed (a rough numerical sketch at the end of this article shows how much these choices can swing the answer):

Wi-Fi device power: A Wi-Fi device transmits in short bursts, active about 1/250th of the time, on average. The Wi-Fi proponents scaled down the power by a like amount, treating a device that transmits intermittently at, say, 250 milliwatts as though it transmitted continuously at 1 mW. The microwave operators argued that interference can occur only while the device is actually transmitting, so they calculated using the full power.

Building attenuation: A 6-GHz signal encounters substantial attenuation from concrete building walls and thermal windows, less from wood walls, and practically none from plain-glass windows. The Wi-Fi proponents took weighted averages over several building materials to calculate typical wall attenuations. The microwave operators reasoned that interference was most likely from an atypical Wi-Fi device behind plain glass, and they calculated accordingly, assuming a minimal amount of attenuation.
Path loss: In estimating the signal loss from a building that houses a Wi-Fi device to a microwave-receiving antenna, the Wi-Fi proponents used a standard propagation model that incorporates attenuation due to other buildings, ground clutter, and the like. The microwave operators were most concerned about a device located with open air between the building and the antenna, so they used free-space propagation in their calculations.

Using their preferred starting assumptions, the Wi-Fi proponents proved that Wi-Fi devices over a wide range of typical situations present no risk of interference. Using a different set of assumptions, the microwave operators proved there is a large risk of interference from a small proportion of Wi-Fi devices in atypical locations, arguing that multiplying that small proportion by almost a billion Wi-Fi devices made interference virtually certain.

Up in the Air

Americans want their smartphones and tablets to have fast Internet access everywhere. That takes a lot of spectrum. Congress passed a statute in 2018 that told the FCC to find more—and specifically to consider 3.7 to 4.2 GHz, part of the C-band, used since the 1960s to receive satellite signals. The FCC partitioned the band in 2020, allocating 3.7 to 3.98 GHz for 5G mobile data. In early 2021, it auctioned the new 5G frequencies for US $81 billion, mostly to Verizon and AT&T. The auction winners were also expected to pay the satellite providers around $13 billion to compensate them for the costs of moving to other frequencies.

A nearby band at 4.2 to 4.4 GHz serves radar altimeters (also called radio altimeters), instruments that tell a pilot or an automatic landing system how high the aircraft is above the ground. The altimeter works by emitting downward radio waves that reflect off the ground and back up to a receiver in the device. The time for the round trip gives the altitude. Large planes operate two or three altimeters simultaneously, for redundancy.

Even though the altimeters use frequencies separated from the 5G band, they can still receive interference from 5G. That’s because every transmitter, including ones used for 5G, emits unwanted signals outside its assigned frequencies. Every receiver is likewise sensitive to signals outside its intended range, some more than others. Interference can occur if energy from a 5G transmitter falls within the sensitivity range of the receiver in an altimeter.

To make way for new 5G cellular services, the Federal Communications Commission reallocated part of the radio spectrum. That reallocation resulted in 5G transmissions that are close in frequency to a band used by aircraft radar altimeters.

The FCC regulates transmitter out-of-band emissions. In contrast, it has few rules on receiver out-of-band reception (although it recently opened a discussion on whether to expand them). Manufacturers generally design receivers to function reliably in their expected environments, which can leave them vulnerable if a new service appears in formerly quiet spectrum near the frequencies they receive on. Aviation interests feared this outcome with the launch of C-band 5G, one citing the possibility of “catastrophic impact with the ground, leading to multiple fatalities.” The FCC’s 5G order tersely dismissed concerns about altimeter interference, although it invited the aviation industry to study the matter further. The industry did so, renewing its concerns and requesting that the wireless carriers refrain from using 5G near airports.
But this came after the wireless carriers had committed almost $100 billion and begun building out facilities. Much as in the case of 6-GHz Wi-Fi, the 5G providers and aviation interests reached different predictions about interference by starting with different assumptions. Some key areas of disagreement were:

5G out-of-band emissions: The aviation interests assumed higher levels than the wireless carriers, which said the levels assumed in the aviation study exceeded FCC limits.

Off-channel sensitivity in altimeter receivers: There are several makes and models of altimeters in use, having varying receiver characteristics, leading to disagreements on which to include in the studies.

Altimeters in the same or other aircraft nearby: A busy airport has a lot of altimeters operating. Wireless carriers said these would overpower 5G interference. Aviation interests countered that multiple altimeters in the area would consume one another’s interference margin and leave them all more vulnerable to 5G.

Aircraft pitch and roll: Aviation interests argued that the changing angles of the aircraft as it approaches the runway can expose the altimeter receivers to more 5G signal.

Reflectivity of the ground: Aviation interests favored modeling with lower values of reflectivity, which reduce the received signal strength at the altimeter and hence increase its susceptibility to 5G interference.

The carriers temporarily paused 5G rollout near some airports, and the airlines canceled and rescheduled some flights. At this writing, the FAA is evaluating potentially affected aircraft, altimeters, and airport systems. Most likely, 5G will prevail. In the extremely improbable event that the FAA and the FCC were to agree that C-band 5G cannot operate safely near airports, the wireless carriers presumably would be entitled to a partial refund of their $81 billion auction payments.

These radio towers, which sit atop Black Mountain in Carmel Valley, Calif., include many drumlike antennas used for 6-gigahertz fixed-microwave links. (Shutterstock)

Hard Decisions

Making complicated trade-offs has long been the job of the five FCC commissioners. They are political appointees, nominated by the president and confirmed by the Senate. The four now in office (there is a vacancy) are all lawyers. It has been decades since a commissioner had a technical background. The FCC has highly capable engineers on staff, but only in advisory roles. The commissioners have no obligation to take their advice.

Congress requires the FCC to regulate “in the public interest,” but the commissioners must determine what that means in each case. Legally, they can reach any result that has at least some support in the submissions, even if other submissions more strongly support an opposite result. Submissions to the FCC in both the 6-GHz and 5G matters conveyed sharp disagreement as to how much safety protection the public interest requires. To fully protect 6-GHz microwave operations against interference from the small fraction of Wi-Fi devices in the line-of-sight of the microwave receivers would require degrading Wi-Fi service for large numbers of people. Similarly, eliminating any chance whatsoever of a catastrophic altimeter malfunction due to 5G interference might require turning off C-band 5G in some heavily populated areas.
The orders that authorized 6-GHz Wi-Fi and C-band 5G did not go that far and did not claim they had achieved zero risk. The order on 5G stated that altimeters had “all due protection.” In the 6-GHz case, with a federal appeals court deferring to its technical expertise, the FCC said it had “reduce[d] the possibility of harmful interference to the minimum that the public interest requires.” These formulations make clear that safety is just one of several elements in the mix of public interests considered. Commissioners have to balance the goals of minimizing the risk of plane crashes and pipeline explosions against the demand for ubiquitous Internet access and Congress’s mandate to repurpose more spectrum. In the end, the commissioners agreed with proponents’ claims that the risk of harmful interference from 6-GHz Wi-Fi is “insignificant,” although not zero, and similarly from 5G, not “likely…under…reasonably foreseeable scenarios”—conclusions that made it possible to offer the new services. People like to think that the government puts the absolute safety of its citizens above all else. Regulation, though, like engineering, is an ever-shifting sequence of trade-offs. The officials who set highway speed limits know that lower numbers will save lives, but they also take into account motorists’ wishes to get to their destinations in a timely way. So it shouldn’t come as a great surprise that the FCC performs a similar balancing act.
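A rough back-of-the-envelope calculation shows how the dueling assumptions in the 6-GHz dispute described above can swing an interference estimate. The sketch uses textbook free-space path loss plus the 1/250 duty cycle and the plain-glass-versus-typical-wall contrast cited in the article; the distance, the 15-dB average wall loss, the 10-dB clutter allowance, and the decision to ignore antenna gains are invented for illustration and are not taken from either side's filings.

```python
# Rough illustration of how assumption choices swing an interference estimate.
# Free-space path loss is textbook physics; the 1/250 duty cycle and the
# "plain glass vs. typical wall" contrast come from the article. The distance,
# the 15-dB wall loss, and the 10-dB clutter allowance are invented placeholders.
from math import log10

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55."""
    return 20 * log10(distance_m) + 20 * log10(freq_hz) - 147.55

FREQ = 6.0e9          # 6 GHz
DIST = 1_000.0        # 1 km between the Wi-Fi device and the microwave antenna
TX_DBM = 24.0         # a 250-mW indoor device, expressed in dBm

# "Incumbent-style" worst case: device actively transmitting, behind plain glass, clear air.
worst = TX_DBM - fspl_db(DIST, FREQ)

# "Proponent-style" typical case: power averaged over the 1/250 duty cycle
# (250 mW -> 1 mW, a 24-dB reduction), plus an assumed 15-dB average wall loss
# and an assumed 10-dB allowance for clutter along the path.
typical = (TX_DBM - 24.0) - fspl_db(DIST, FREQ) - 15.0 - 10.0

print(f"Free-space path loss at 1 km, 6 GHz: {fspl_db(DIST, FREQ):.1f} dB")
print(f"Worst-case received interference:   {worst:.1f} dBm")
print(f"Typical-case received interference: {typical:.1f} dBm")
print(f"Gap between the two sets of assumptions: {worst - typical:.0f} dB")
```

The roughly 49-decibel gap between the two answers is a factor of about 80,000 in power, which is how two studies built on the same physics can land on opposite sides of any interference threshold.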

  • Video Friday: Robot Training
    by Evan Ackerman on 27. May 2022. at 17:41

    Your weekly selection of awesome robot videos Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. IEEE ARSO 2022: 28–30 May 2022, LONG BEACH, CALIF. RSS 2022: 21 June–1 July 2022, NEW YORK CITY ERF 2022: 28–30 June 2022, ROTTERDAM, NETHERLANDS RoboCup 2022: 11–17 July 2022, BANGKOK IEEE CASE 2022: 20–24 August 2022, MEXICO CITY CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND Enjoy today’s videos! Finally, after the first Rocky movie in 1976, the Robotic Systems Lab presents a continuation of the iconic series. Our transformer robot visited Philly in 2022 as part of the International Conference on Robotics and Automation. [ Swiss-Mile ] Human cells grown in the lab could one day be used for a variety of tissue grafts, but these cells need the right kind of environment and stimulation. New research suggests that robot bodies could provide tendon cells with the same kind of stretching and twisting as they would experience in a real human body. It remains to be seen whether using robots to exercise human cells results in a better tissue for transplantation into patients. [ Nature ] Researchers from Carnegie Mellon University took an all-terrain vehicle on wild rides through tall grass, loose gravel and mud to gather data about how the ATV interacted with a challenging, off-road environment. The resulting dataset, called TartanDrive, includes about 200,000 of these real-world interactions. The researchers believe the data is the largest real-world, multimodal, off-road driving dataset, both in terms of the number of interactions and types of sensors. The five hours of data could be useful for training a self-driving vehicle to navigate off road. [ CMU ] Chengxu Zhou from the University of Leeds writes, “we have recently done a demo with one operator teleoperating two legged manipulator for a bottle opening task.” [ Real Robotics ] Thanks, Chengxu! We recently hosted a Youth Fly Day, bringing together 75 Freshman students from ICA Cristo Rey All Girls Academy of San Francisco for a day of hands-on exposure to and education about drones. It was an exciting opportunity for the Skydio team to help inspire the next generation of women pilots and engineers. [ Skydio ] Legged robotic systems leverage ground contact and the reaction forces they provide to achieve agile locomotion. However, uncertainty coupled with the discontinuous nature of contact can lead to failure in real-world environments with unexpected height variations, such as rocky hills or curbs. To enable dynamic traversal of extreme terrain, this work introduces the utilization of proprioception to estimate and react to unknown hybrid events and elevation changes and a two-degree-of-freedom tail to improve control independent of contact. If you like this and are in the market for a new open source quadruped controller, CMU’s got that going on, too. [ Robomechanics Lab ] A bolt-on 360 camera kit for your drone that costs $430. [ Insta360 ] I think I may be too old to have any idea what’s going on here. [ Neato ] I’m not the biggest fan of the way the Stop Killer Robots folks go about trying to make their point, but they have a new documentary out, so here you go. [ Immoral Code ] This symposium hosted by the U.S. 
Department of Commerce and National Institute of Standards and Technology, Stanford Institute for Human-Centered Artificial Intelligence (HAI), and the FinRegLab, brought together leaders from government, industry, civil society, and academia to explore potential opportunities and challenges posed by artificial intelligence and machine learning deployment across different economic sectors, with a particular focus on financial services and healthcare. [ Stanford HAI ]

  • Get the Coursera Campus Skills Report 2022
    by Coursera on 27. May 2022. at 16:39

    Get comprehensive insights into higher education skill trends based on data from 3.8M registered learners on Coursera, and learn clear steps you can take to ensure your institution's engineering curriculum is aligned with the needs of the current and future job market. Download the report now!

  • Writing UVM/SystemVerilog Testbenches for Analog/Mixed-Signal Verification
    by Scientific Analog, Inc. on 27. May 2022. at 15:35

    Learn how to write reusable SystemVerilog testbenches for analog/mixed-signal IPs, using standardized UVM components and Scientific Analog's XMODEL! Register for this free webinar now!

Join this webinar on how to write a UVM testbench for analog/mixed-signal circuits. UVM (Universal Verification Methodology) is a framework of standardized SystemVerilog classes to build reusable and scalable testbenches for digital designs, and it can be extended to verifying analog circuits simply by using a fixture module that generates analog stimuli and measures analog responses with Scientific Analog's XMODEL. Using a digitally-programmable audio bandpass filter as an example, we'll show how to write a UVM testbench that measures the filter's transfer gain at randomly-chosen frequencies, collects the results in a scoreboard until the desired coverage is met, and checks the supply current and bias voltages during power-down with assertions. The webinar will start with an intuitive yet systematic introduction to UVM.

Speaker: Charles Dančak, Verification Instructor and Consultant. Charles Dančak is a trainer and consultant based in Silicon Valley. He holds two MS degrees, one in electrical engineering and one in solid-state physics. Charles began his career as a technology engineer in one of Intel's wafer fabs and spent ten years at Synopsys developing hands-on courses on HDL-based design, simulation, and DFT. He introduced the first SystemVerilog workshop in the University of California Extension system in 2007 and still teaches SystemVerilog online, currently with UC San Diego Extension (ECE-40301). Recently, Charles presented a paper on UVM for analog/mixed-signal verification at DVCon U.S. 2022.

  • Charles Babbage’s Difference Engine Turns 200
    by Allison Marsh on 27. May 2022. at 15:00

    It was an idea born of frustration, or at least that’s how Charles Babbage would later recall the events of the summer of 1821. That fateful summer, Babbage and his friend and fellow mathematician John Herschel were in England editing astronomical tables. Both men were founding members of the Royal Astronomical Society, but editing astronomical tables is a tedious task, and they were frustrated by all of the errors they found. Exasperated, Babbage exclaimed, “I wish to God these calculations had been executed by steam.” To which Herschel replied, “It is quite possible.”

Babbage and Herschel were living in the midst of what we now call the Industrial Revolution, and steam-powered machinery was already upending all types of business. Why not astronomy too? Babbage set to work on the concept for a Difference Engine, a machine that would use a clockwork mechanism to solve polynomial equations. He soon had a small working model (now known as Difference Engine 0), and on 14 June 1822, he presented a one-page “Note respecting the Application of Machinery to the Calculation of Astronomical Tables” to the Royal Astronomical Society. His note doesn’t go into much detail—it’s only one page, after all—but Babbage claimed to have “repeatedly constructed tables of squares and triangles of numbers” as well as of the very specific formula x² + x + 41. He ends his note with much optimism: “From the experiments I have already made, I feel great confidence in the complete success of the plans I have proposed.” That is, he wanted to build a full-scale Difference Engine.

Perhaps Babbage should have tempered his enthusiasm. His magnificent Difference Engine proved far more difficult to build than his note suggested. It wasn’t for lack of trying, or lack of funds. For Babbage managed to do something else that was almost as unimaginable: He convinced the British government to fund his plan. The government saw the value in a machine that could calculate the many numerical tables used for navigation, construction, finance, and engineering, thereby reducing human labor (and error). With an initial investment of £1,700 in 1823 (about US $230,000 today), Babbage got to work.

The Difference Engine was a calculator with 25,000 parts

The 19th-century mathematician Charles Babbage’s visionary contributions to computing were rediscovered in the 20th century. (The Picture Art Collection/Alamy)

Babbage based his machine on the mathematical method of finite differences, which allows you to solve polynomial equations in a series of iterative steps that compare the differences in the resulting values. This method had the advantage of requiring simple addition only, which was easier to implement using gear wheels than one based on multiplication and division would have been. (The Computer History Museum has an excellent description of how the Difference Engine works.) Although Babbage had once dreamed of a machine powered by steam, his actual design called for a human to turn a crank to advance each iteration of calculations.

Difference Engine No. 1 was divided into two main parts: the calculator and the printing mechanism. Although Babbage considered using different numbering systems (binary, hexadecimal, and so on), he decided to stick with the familiarity of the base-10 decimal system. His design in 1830 had a capacity of 16 digits and six orders of difference. Each number value was represented by its own wheel/cam combination.
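The method of finite differences is easy to show in code. For a quadratic such as the x² + x + 41 table Babbage mentioned in his note, the second difference is constant, so once the first values are seeded, the whole table can be cranked out with nothing but additions, the one operation his gear wheels had to perform. A minimal sketch in modern shorthand, obviously not Babbage's notation:

```python
# Difference-engine style tabulation of f(x) = x^2 + x + 41 using additions only.
# Seed the machine with f(0), the first difference f(1) - f(0), and the constant
# second difference; every later value falls out of two additions per step.
def difference_engine_table(steps):
    value = 41          # f(0)
    d1 = 2              # f(1) - f(0)
    d2 = 2              # constant second difference of this quadratic
    table = [value]
    for _ in range(steps):
        value += d1     # next function value
        d1 += d2        # next first difference
        table.append(value)
    return table

print(difference_engine_table(9))
# -> [41, 43, 47, 53, 61, 71, 83, 97, 113, 131], matching x*x + x + 41 for x = 0..9
```

Babbage's full design handled six orders of difference and 16-digit values, but the principle is the same: each turn of the crank propagates a column of additions.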
The wheels represented only whole numbers; the machine was designed to jam if a result came out between whole numbers. As the calculator cranked out the results, the printing mechanism did two things: It printed a table while simultaneously making a stereotype mold (imprinting the results in a soft material such as wax or plaster of paris). The mold could be used to make printing plates, and because it was made at the same time as the calculations, there would be no errors introduced by humans copying the results.

Difference Engine No. 1 contained more than 25,000 distinct parts, split roughly equally between the calculator and the printer. The concepts of interchangeable parts and standardization were still in their infancy. Babbage thus needed a skilled craftsman to manufacture the many pieces. Marc Isambard Brunel, part of the father-and-son team of engineers who had constructed the first tunnel under the Thames, recommended Joseph Clement. Clement was an award-winning machinist and draftsman whose work was valued for its precision. Babbage and Clement were both brilliant at their respective professions, but they often locked horns. Clement knew his worth and demanded to be paid accordingly. Babbage grew concerned about costs and started checking on Clement’s work, which eroded trust.

The two did produce a portion of the machine [shown at top] that was approximately one-seventh of the complete engine and featured about 2,000 moving parts. Babbage demonstrated the working model in the weekly soirees he held at his home in London. The machine impressed many of the intellectual society set, including a teenage Ada Byron, who understood the mathematical implications of the machine. Byron was not allowed to attend university due to her sex, but her mother supported her academic interests. Babbage suggested several tutors in mathematics, and the two remained correspondents over their lifetimes. In 1835, Ada married William King. Three years later, when he became the first Earl of Lovelace, Ada became Countess of Lovelace. (More about Ada Lovelace shortly.)

Despite the successful chatter in society circles about Babbage’s Difference Engine, trouble was brewing—cost overruns, political opposition to the project, and Babbage and Clement’s personality differences, which were causing extreme delays. Eventually, the relationship between Babbage and Clement reached a breaking point. After yet another fight over finances, Clement abruptly quit in 1832.

The Analytical Engine was a general-purpose computer

Ada Lovelace championed Charles Babbage’s work by, among other things, writing the first computer algorithm for his unbuilt Analytical Engine. (Interim Archives/Getty Images)

Despite these setbacks, Babbage had already started developing a more ambitious machine: the Analytical Engine. Whereas the Difference Engine was designed to solve polynomials, this new machine was intended to be a general-purpose computer. It was composed of several smaller devices: one to list the instruction set (on punch cards popularized by the Jacquard loom); one (called the mill) to process the instructions; one (which Babbage called the store but we would consider the memory) to store the intermediary results; and one to print out the results. In 1840 Babbage gave a series of lectures in Turin on his Analytical Engine, to much acclaim. Italian mathematician Luigi Federico Menabrea published a description of the engine in French in 1842, “Notions sur la machine analytique.” This is where Lady Lovelace returns to the story.
Lovelace translated Menabrea’s description into English, discreetly making a few corrections. The English scientist Charles Wheatstone, a friend of both Lovelace and Babbage, suggested that Lovelace augment the translation with explanations of the Analytical Engine to help advance Babbage’s cause. The resulting “Notes,” published in 1843 in Richard Taylor’s Scientific Memoirs, was three times the length of Menabrea’s original essay and contained what many historians consider the first algorithm or computer program. It is quite an accomplishment to write a program for an unbuilt computer whose design was still in flux. Filmmakers John Fuegi and Jo Francis captured Ada Lovelace’s contributions to computing in their 2003 documentary Ada Byron Lovelace: To Dream Tomorrow. They also wrote a companion article published in the IEEE Annals of the History of Computing, entitled “Lovelace & Babbage and the Creation of the 1843 ‘Notes’.”

Although Lovelace’s translation and “Notes” were hailed by leading scientists of the day, they did not win Babbage any additional funding. Prime Minister Robert Peel had never been a fan of Babbage’s; as a member of Parliament back in 1823, he had been a skeptic of Babbage’s early design. Now that Peel was in a position of power, he secretly solicited condemnations of the Difference Engine. In a stormy meeting on 11 November 1842, the two men argued past each other. In January 1843, Babbage was informed that Parliament was sending the finished portion of Difference Engine No. 1 to the King’s College Museum. Two months later, Parliament voted to withdraw support for the project. By then, the government had spent £17,500 (about US $3 million today), waited 20 years, and still didn’t have a working machine. You could see why Peel thought it was a waste.

But Babbage, perhaps reinvigorated by his work on the Analytical Engine, decided to return to the Difference Engine in 1846. Difference Engine No. 2 required only 8,000 parts and had a much more elegant and efficient design. He estimated it would weigh 5 tons and measure 11 feet long and 7 feet high. He worked for another two years on the machine and left 20 detailed drawings, which were donated to the Science Museum after he died in 1871.

A modern team finally builds Babbage’s Difference Engine

In 1985, a team at the Science Museum in London set out to build the streamlined Difference Engine No. 2 based on Babbage’s drawings. The 8,000-part machine was finally completed in 2002. (Science Museum Group)

Although Difference Engine No. 2, like all the other engines, was never completed during Babbage’s lifetime, a team at the Science Museum in London set out to build one. Beginning in 1985, under the leadership of Curator of Computing Doron Swade, the team created new drawings adapted to modern manufacturing techniques. In the process, they sought to answer a lingering question: Was 19th-century precision a limiting factor in Babbage’s design? The answer is no. The team concluded that if Babbage had been able to secure enough funding and if he had had a better relationship with his machinist, the Difference Engine would have been a success.

That said, some of the same headaches that plagued Babbage also affected the modern team. Despite leaving behind fairly detailed designs, Babbage left no introductory notes or explanations of how the pieces worked together. Much of the groundbreaking work interpreting the designs was done by Australian computer scientist and historian Allan G. Bromley, beginning in 1979.
Even so, the plans had dimension inconsistencies, errors, and entire parts omitted (such as the driving mechanism for the inking), as described by Swade in a 2005 article for the IEEE Annals of the History of Computing. The team had wanted to complete the Difference Engine by 1991, in time for the bicentenary of Babbage’s birth. They did finish the calculating section by then. But the printing and stereotyping section—the part that would have alleviated all of Babbage’s frustrations in editing those astronomical tables—took another nine years. The finished product is on display at the Science Museum. A duplicate engine was built with funding from former Microsoft chief technology officer Nathan Myhrvold. The Computer History Museum displayed that machine from 2008 to 2016, and it now resides in the lobby of Myhrvold’s Intellectual Ventures in Bellevue, Wash. The title of the textbook for the very first computer science class I ever took was The Analytical Engine. It opened with a historical introduction about Babbage, his machines, and his legacy. Babbage never saw his machines built, and after his death, the ideas passed into obscurity for a time. Over the course of the 20th century, though, his genius became more clear. His work foreshadowed many features of modern computing, including programming, iteration, looping, and conditional branching. These days, the Analytical Engine is often considered an invention 100 years ahead of its time. It would be anachronistic and ahistorical to apply today’s computer terminology to Babbage’s machines, but he was clearly one of the founding visionaries of modern computing. Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the June 2022 print issue as “The Clockwork Computer."

  • Inventor of the First AI System That Could Read Handwriting Dies at 72
    by Joanna Goodrich on 26. May 2022. at 18:00

    Sargur “Hari” Srihari
Pioneer of computational forensics
Life Fellow, 72; died 8 March
(Photo: University at Buffalo)

Srihari helped create an artificial intelligence system in 1991 that enabled machines to read handwritten letters. The U.S. Postal Service still uses the system to sort mail. Srihari was a pioneer in the field of computational forensics who in 2002 developed CEDAR-FOX, a software system that identifies people through their handwriting. He was a professor of computer science and engineering for more than 40 years. He taught at the State University of New York as well as the University at Buffalo, where he founded its Center of Excellence for Document Analysis and Recognition. The faculty and students use the CEDAR research lab to work on technologies involving pattern recognition, machine learning, data mining, information retrieval, and computational linguistics. It was at CEDAR where Srihari helped develop the AI system. The U.S. Postal Service provided the program with more than US $60 million in funding during the project’s 25 years. In 2002 Srihari created CEDAR-FOX, which has been updated to allow the system to identify people through their fingerprints and shoe prints. Srihari held seven U.S. patents. Because of his expertise, Srihari was asked in 2007 to serve on the U.S. National Academy of Sciences’ committee on identifying the needs of the forensic science community, the only computer scientist on the body. It produced a report in 2009 about how the U.S. criminal justice system could strengthen its use of forensic science. Srihari received bachelor’s degrees in physics and mathematics in 1967 from Bangalore University in India. He also earned a bachelor’s degree in electrical communication engineering in 1970 from the Indian Institute of Science, in Bangalore. Srihari went on to earn a Ph.D. in computer and information science in 1976 from Ohio State University, in Columbus.

Charles H. Gager
Former head of Mitre’s space surveillance systems
Member, 91; died 24 March

Gager joined the research and engineering division of AIL, in St. James, N.Y., in 1951. There he conducted research in radar techniques and helped develop technologies such as moving-target identification equipment, monopulse radar, and high-resolution radar. He left the company in 1979 to join The Mitre Corp. in McLean, Va., where he helped develop surveillance sensors and technology for electronic warfare and tactical defense measures. He was promoted in 1984 to head the company’s space surveillance systems department. After he retired, he and his wife moved to Norwell, Mass., and he became an active IEEE volunteer. He also taught a course about the history and evolution of U.S. intelligence operations for Harvard’s Institute for Learning in Retirement. Gager received a bachelor’s degree in electrical engineering in 1950 from the Polytechnic Institute of Brooklyn (now the New York University Tandon School of Engineering).

Thomas K. Ishii
Founder of the IEEE Microwave Theory and Techniques Society’s Milwaukee Section
Life member, 94; died 27 December

Ishii was an active IEEE volunteer who established the IEEE Microwave Theory and Techniques Society Milwaukee Section. He served as an associate editor of IEEE Transactions on Circuits and Systems from 1989 to 1991. He served as a consultant for several companies including Wisconsin Electric Power, Honeywell, and Johnson Controls, as well as a number of law firms. Ishii received a bachelor’s degree and a Ph.D. in engineering from Nihon University, in Tokyo.
He stayed on as an electrical engineering professor after graduating in 1950. He left six years later to pursue a second master’s degree and a doctorate at the University of Wisconsin-Madison. He graduated in 1959 and joined Marquette University, in Milwaukee, as a professor. He retired in 1998 and was named professor emeritus. He held two U.S. patents and three Japanese patents for microwave devices. Ishii was honored with several awards including the 2000 IEEE Millennium Medal, the 1984 IEEE Centennial Medal, and the 1969 T.C. Burnum IEEE Milwaukee Section Memorial Award.

Leland Ross Megargel
Electrical engineer
Life member, 93; died 13 November

Megargel worked as an electrical engineer for several companies including General Electric, Valley Forge, and International Signal and Control. After graduating in 1945 from Lake Ariel Consolidated School, in Pennsylvania, he enlisted in the U.S. Army. He was stationed in Japan and helped with the country’s reconstruction projects following World War II. He was honorably discharged in 1947. He was granted several U.S. patents. Megargel received a bachelor’s degree in electrical engineering in 1951 from Pennsylvania State University.

Mirela Sechi Moretti Annoni Notare
Editorial advisory board member of The Institute
Senior member, 53; died 14 April 2021

Notare was a professor at the Universidade Federal de Santa Catarina, in Florianópolis, Brazil. She was an active IEEE member for 25 years, serving on several boards and committees including The Institute’s editorial advisory board. She was a member of the Region 9 NoticIEEEro newsletter committee and was on the editorial staff of IEEE Latin America Transactions.

  • Fundamental Energy Transitions Can Take a Century
    by Vaclav Smil on 26. May 2022. at 15:00

    One hundred and forty years ago, Thomas Edison began generating electricity at two small coal-fired stations, one in London (Holborn Viaduct), the other in New York City (Pearl Street Station). Yet although electricity was clearly the next big thing, it took more than a lifetime to reach most people. Even now, not all parts of the world have easy access to it. Count this slow rollout as one more reminder that fundamental systemic transitions are protracted affairs. Such transitions tend to follow an S-shaped curve: Growth rates shift from slow to fast, then back to slow again. I will demonstrate this by looking at a few key developments in electricity generation and residential consumption in the United States, which has reliable statistics for all but the earliest two decades of the electric period.

In 1902, the United States generated just 6 terawatt-hours of electricity, and the century-plus-old trajectory shows a clear S-curve. By 1912, the output was 25 TWh, by 1930 it was 114 TWh, by 1940 it was 180 TWh, and then three successive decadal doublings lifted it to almost 1,600 TWh by 1970. During the go-go years, the 1930s was the only decade in which gross electricity generation did not double, but after 1970 it took two decades to double, and from 1990 to 2020, the generation rose by only one-third. As the process began to mature, the rising consumption of electricity was at first driven by declining prices, and then by the increasing variety of uses for electricity. The impressive drop in inflation-adjusted prices of electricity ended by 1970, and electricity generation reached a plateau, at about 4,000 TWh per year, in 2007.

The early expansion of generation was destined for industry—above all for the conversion from steam engines to electric motors—and for commerce. Household electricity use remained restrained until after World War II. In 1900, fewer than 5 percent of all households had access to electricity; the biggest electrification jump took place during the 1920s, when the share of dwellings with connections rose from about 35 percent to 68 percent. By 1956, the diffusion was virtually complete, at 98.8 percent. But access did not correlate strongly with use: Residential consumption remained modest, accounting for less than 10 percent of the total generation in 1930, and about 13 percent on the eve of World War II.

In the 1880s, Edison light bulbs (inefficient and with low luminosity) were the first widely used indoor electricity converters. Lighting remained the dominant use for electricity in the household for the next three decades. It took a long time for new appliances to make a difference, because there were significant gaps between the patenting and introduction of new appliances—including the electric iron (1903), the vacuum cleaner (1907), the toaster (1909), the electric stove (1912), the refrigerator (1913)—and their widespread ownership. Radio was adopted the fastest of all: 75 percent of households had it by 1937. The same dominant share was reached by refrigerators and stoves only in the 1940s—dishwashers by 1975, color TVs by 1977, and microwave ovens by 1988. Again, as expected, these diffusions followed more or less orderly S-curves. Rising ownership of these and a range of other heavy electricity users drove the share of residential consumption to 25 percent by the late 1960s, and to about 40 percent in 2020.
This share is well above Germany’s 26 percent and far above China’s roughly 15 percent. A new market for electricity is opening up, but slowly: So far, Americans have been reluctant buyers of electric vehicles, and, notoriously, they have long spurned building a network of high-speed electric trains, which every other affluent country has done. This article appears in the June 2022 print issue as “Electricity’s Slow Rollout.”
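The fast-then-slow pattern behind the S-curve Smil describes can be checked from the milestones quoted in the article. In the sketch below, the 1990 and 2020 values are back-calculated from his statements that generation doubled between 1970 and 1990 and rose by a third between 1990 and 2020, so they are approximations rather than independent data.

```python
# Implied average annual growth rates between the U.S. electricity-generation
# milestones quoted in the article (1990 and 2020 are back-calculated from the
# "doubled in two decades" and "rose by one-third" statements, so approximate).
milestones_twh = {
    1902: 6,
    1912: 25,
    1930: 114,
    1940: 180,
    1970: 1600,   # "almost 1,600 TWh"
    1990: 3200,   # roughly double the 1970 figure
    2020: 4270,   # roughly one-third above the 1990 figure
}

years = sorted(milestones_twh)
for start, end in zip(years, years[1:]):
    ratio = milestones_twh[end] / milestones_twh[start]
    annual = ratio ** (1 / (end - start)) - 1
    print(f"{start}-{end}: {annual:5.1%} per year")
# Growth falls from roughly 15 percent a year at the start of the 20th century
# to about 1 percent a year after 1990 -- the signature of a maturing S-curve.
```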

  • Print an Arduino-Powered Color Mechanical Television
    by Markus Mierse on 25. May 2022. at 15:00

    Before flat screens, before even cathode-ray tubes, people watched television programs at home thanks to the Nipkow disk. Ninety years ago in places like England and Germany, broadcasters transmitted to commercially produced black-and-white electromechanical television sets, such as the Baird Televisor, that used these disks to produce moving images. This early programming established many of the formats we take for granted today, such as variety shows and outside broadcasts.

The size and weight of a Nipkow disk make a display with more than a few dozen scan lines impracticable (in stark contrast to modern screens with thousands of lines). But when a mechanical TV is fed a moving image, the result is surprisingly watchable. And Nipkow displays are fascinating in their simplicity—no high voltages or complex matrices. So I wondered: What was the easiest way to build such a display that would produce a good-quality image?

I’d been interested in Nipkow disks since I was a student, trying a few experiments with cardboard disks that didn’t really produce anything. In more recent years, I saw that a number of people had built modern Nipkow displays—even incorporating color—but these relied on having access to pricey machine tools and materials. I set about designing an inexpensive version that could be made using a consumer-grade 3D printer.

The secret of a Nipkow disk is in its spiral of holes. A light source behind the disk illuminates a small region. A motor spins the disk, and each hole passes through the lighted region in turn, creating a series of (slightly curved) scan lines. If the illumination is varied in sync with the time it takes each hole to cross the viewing region, you can build up images in the display frame.

The first thing I had to do was figure out the disk. I chose to make the disk 20 centimeters in diameter, as that’s a size most home 3D printers can manage. This dictated the resolution of my display, since there’s a limit to how small you can produce precisely shaped and positioned holes. I wrote software that allowed me to generate test disks with my Prusa i3 MK3S+ printer, settling on 32 holes for 32 scan lines. The display is a trapezoid, 21.5 millimeters wide and 13.5 mm tall at one end and 18 mm tall at the other. One unexpected benefit of printing a disk with holes, rather than drilling them, was that I could make the holes square, resulting in a much sharper image than with circular holes.

A printed 20-centimeter-diameter disk (1), complete with 32 holes, is illuminated by an RGB LED module (2). An Arduino Mega (3) controls the LEDs, while the motor’s (4) speed is adjusted by a potentiometer (5). Images and movies are stored and read from an SD card (6). Digital LED data from the Mega is converted to analog voltage using a custom PCB (7), and the rotation of the disk is monitored with an infrared sensor (8). (James Provost)

For a light source, I used an LED module with red, green, and blue elements placed behind a diffuser. A good picture requires a wide dynamic range of brightness and color, which means driving each element with more power and precision than a microcontroller can typically provide directly. I designed a 6-bit digital-to-analog converter circuit, and had custom printed circuit boards made, each with two copies of the circuit. I stacked two PCBs on top of each other so that one copy of my DAC drives one LED color (with a spare circuit left over in case I made any mistakes populating the PCBs with components!).
    Three 6-bit channels give a combined resolution of 18 bits per pixel. Three potentiometers let me adjust the brightness of each channel. An Arduino Mega microcontroller provides the brains. The Mega has enough RAM to hold screen frames and enough input/output pins to dedicate an entire port to each color (a port allows you to address up to eight pins simultaneously, using the bits of a single byte to turn each pin on or off). While this did mean effectively wasting two pins per channel, the Mega has pins to spare, and addressing a port provides a considerable speed advantage over bit-banging to address each pin separately. One unexpected benefit of printing a disk was that I could make the holes square, resulting in a much sharper image. The Mega synchronizes its output with the disk’s rotation using an infrared sensor triggered by a reflective strip attached to the back of the disk. Thanks to this sensor, I didn’t have to worry about precisely controlling the speed of the disk. I used a low-noise 12-volt DC XD-3420 motor, which is easily obtained. I connected some additional controls to the Mega—a mode button that switches between photos and videos, a play/pause button, and a skip-track button to advance to the next file. 3D-printed frames hold everything together, mounted on a wooden base. Because my TV uses 6 bits per channel per pixel instead of the 8 bits used by most modern image formats, I created a conversion tool that is free to download, along with all of the other supporting files for this project, from Hackster.io. Videos are treated as a collection of still images sent to the display at a rate of about 25 to 30 frames per second, depending on the exact speed of the disk. You can convert video into suitable collections using the open-source software VirtualDub, and then pass the results through my converter. As each hole moves in front of the LED light source, the brightness of the LED is modulated to create a scan line of the image. James Provost Movies and images are stored on an SD card and read into the Mega’s memory using an SD module via its SPI connection. The TV simply scans the top-level directory and begins displaying images—one at a time in the case of photos, and automatically advancing to the next image in video mode. Initially, when I tried to play movies there was noticeable stuttering thanks to dropped frames. I discovered this was due to the standard Arduino SD library—it can handle data transfers at a rate of only 25 kilobytes per second or so, while at 25 frames per second the display is looking for data at a rate of 75 kB/s. The problem was solved by switching to the optimized SdFat library, which provides much faster read access. The result is a display that’s small enough to fit nicely on a desk or shelf, but produces a bright, colorful, and steady image at a frame rate fast enough for most video. All-electronic television may have ultimately triumphed over mechanical sets, but my spinning Nipkow disk is a visceral reminder that powerful forces can spring from simple origins. This article appears in the June 2022 print issue as “EZ Mechanical TV.”
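    The conversion itself boils down to resizing each frame and dropping the two least significant bits of every 8-bit channel. Here is a rough Python illustration of that step, not the downloadable tool itself; the 32-by-32 frame size and the raw byte-per-channel output format are assumptions, and the file names are hypothetical.

        from PIL import Image   # Pillow

        LINES = 32              # scan lines on the disk
        PIXELS_PER_LINE = 32    # assumed horizontal resolution of each line

        def convert(path_in, path_out):
            """Resize an image to the display raster and write 6-bit-per-channel bytes."""
            img = Image.open(path_in).convert("RGB").resize((PIXELS_PER_LINE, LINES))
            with open(path_out, "wb") as f:
                for r, g, b in img.getdata():
                    # Drop the two least significant bits of each channel: 8 bits become 6.
                    f.write(bytes((r >> 2, g >> 2, b >> 2)))

        # convert("photo.jpg", "photo.raw")   # hypothetical file names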

  • World Builders Put Happy Face On Superintelligent AI
    by Eliza Strickland on 25. May 2022. at 13:00

    One of the biggest challenges in a world-building competition that asked teams to imagine a positive future with superintelligent AI: Make it plausible. The Future of Life Institute, a nonprofit that focuses on existential threats to humanity, organized the contest and is offering a hefty prize purse of up to US $140,000, to be divided among multiple winners. Last week FLI announced the 20 finalists from 144 entries, and the group will declare the winners on 15 June. “We’re not trying to push utopia. We’re just trying to show futures that are not dystopian, so people have something to work toward.”—Anna Yelizarova, Future of Life Institute The contest aims to counter the common dystopian narrative of artificial intelligence that becomes smarter than humans, escapes our control, and makes the world go to hell in one way or another. The philosopher Nick Bostrom famously imagined a factory AI turning all the world’s matter into paper clips to fulfill its objective, and many respected voices in the field, such as computer scientist Stuart Russell, have argued that it’s essential to begin work on AI safety now, before superintelligence is achieved. Add in the sci-fi novels, TV shows, and movies that tell dark tales of AI taking over—the Blade Runners, the Westworlds, the Terminators, the Matrices (both original recipe and Resurrections)—and it’s no wonder the public feels wary of the technology. Anna Yelizarova, who’s managing the contest and other projects at FLI, says she feels bombarded by images of dystopia in the media, and says it makes her wonder “what kind of effect that has on our worldview as a society.” She sees the contest partly as a way to provide hopeful visions of the future. “We’re not trying to push utopia,” she says, noting that the worlds built for the contest are not perfect places with zero conflicts or struggles. “We’re just trying to show futures that are not dystopian, so people have something to work toward,” she says. The contest asked a lot from the teams who entered: They had to provide a timeline of events from now until 2045 that includes the invention of artificial general intelligence (AGI), two “day in the life” short stories, answers to a list of questions, and a media piece reflecting their imagined world. Yelizarova says that another motivation for the contest was to see what sorts of ideas people would come up with. Imagining a hopeful future with AGI is inherently more difficult than imagining a dystopian one, she notes, because it requires coming up with solutions to some of the biggest challenges facing humanity. For example, how to ensure that world governments work together to deploy AGI responsibly and don’t treat its development as an arms race? And how to create AGI agents whose goals are aligned with those of humans? “If people are suggesting new institutions or new ways of tackling problems,” Yelizarova says, “those can become actual policy efforts we can pursue in the real world.” “For a truly positive transformative relationship with AI, it needs to help us—to help humanity—become better.... And the idea that such a world might be possible is a future that I want to fight for.”—Rebecca Rapple, finalist in the Future of Life Institute’s world-building contest It’s worth diving into the worlds created by the 20 finalists and browsing through the positive possible futures. IEEE Spectrum corresponded with two finalists who have very different visions. 
The first, a solo effort by Rebecca Rapple of Portland, Ore., imagines a world in which an AGI agent named TAI has a direct connection with nearly every human on earth via brain-computer interfaces. The world’s main currency is one of TAI’s devising, called Contribucks, which are earned via positive social contributions and which lose value the longer they’re stored. People routinely plug into a virtual experience called Communitas, which Rapple’s entry describes as “a TAI-facilitated ecstatic group experience where sentience communes, sharing in each other’s experiences directly through TAI.” While TAI is not directly under humans’ control, she has stated that “she loves every soul” and people both trust her and think she’s helping them to live better lives. Rapple, who describes herself as a pragmatic optimist, says that crafting her world was an uplifting process. “The assumption at the core of my world is that for a truly positive transformative relationship with AI, it needs to help us—to help humanity—become better,” she tells Spectrum. “Better to ourselves, our neighbors, our planet. And the idea that such a world might be possible is a future that I want to fight for.” The second team Spectrum corresponded with is a trio from Nairobi, Kenya: Conrad Whitaker, Dexter Findley, and Tracey Kamande. In the world imagined by this team, AGI emerged from a “new non–von Neumann computing paradigm” in which memory is fully integrated into processing. As an AGI agent describes it in one of the team's short stories, AGI has resulted “from the digital replication of human brain structure, with all its separate biological components, neural networks and self-referential loops. Nurtured in a naturalistic setting with constant positive human interaction, just like a biological human infant.” In this world there are over 1,000 AGIs, or digital humans, by the year 2045; the machine learning and neural networks that we know as AI today are widely used for optimization problems, but aren’t considered true, general-purpose intelligence. Those AIs, in so many words, are not AGI. But in the present scenario being imagined, many people live in AGI-organized “digital nations” that they can join regardless of their physical locations, and which bring many health and social benefits. In an email, the Kenyan team says they aimed to paint a picture of a future that is “strong on freedoms and rights for both humans and AGIs—going so far as imagining that a caring and respectful environment that encouraged unbridled creativity and discourse (conjecture and criticism) was critical to bringing an ‘artificial person’ to maturity in the first place.” They imagine that such AGI agents wouldn’t see themselves as separate from humans as they would be “humanlike” in both their experience of knowledge and their sense of self, and that the AGI agents would therefore have a humanlike capacity for moral knowledge. Meaning that these AGI agents would see the problem with turning all humans on earth into paper clips.

  • Startup Makes It Easier to Detect Fires With IoT and Flir Cameras
    by Kathy Pretz on 24. May 2022. at 18:00

    Fires at recycling sorting facilities, ignited by combustible materials in the waste stream, can cause millions of dollars in damage, injuring workers and first responders and contaminating the air. Detecting the blazes early is key to preventing them from getting out of control. Startup MoviTHERM aims to do that. The company sells cloud-based fire-detection monitoring systems to recycling facilities. Using thermal imaging and heat and smoke sensors, the system alerts users—on and off the site—when a fire is about to break out. To date, MoviTHERM’s system operates in five recycling facilities. The company’s founder, IEEE Member Markus Tarin, says the product also can be used in coal stockpiling operations, industrial laundries, scrapyards, and warehouses. In 1999 Tarin launched a consulting business in Irvine, Calif., to do product design and testing, mostly for medical clients. Then thermal-camera manufacturer Flir hired him to write software to automate non-contact temperature measurement processes to give Flir’s customers the ability to act quickly on temperature changes spotted by their cameras. “That was the beginning of me going into the thermal-imaging world and applying my knowledge,” he says. “I saw a lot of need out there because there weren’t very many companies doing this sort of thing.” Tarin started MoviTHERM in 2008 to be a distributor and systems integrator for Flir thermal imagers. The company, now with 11 employees and Tarin at the helm, is still in that business. However, Tarin became frustrated by the fact that the software he developed was not scalable; rather, it was tailored to each customer’s specific needs. And he began to discover around 2015 that interest in such custom software had fallen off. “I could no longer easily sell a customized solution,” he says, “because it was perceived as too risky and too expensive.” “We are trying to prevent catastrophic losses and environmental damage.” INTELLIGENT MODULE In 2016 he began developing the MoviTHERM Series Intelligent I/O module (MIO). He targeted it at fire detection in recycling facilities, he says, because “we had a lot of customers reaching out for solutions in that field.” The Ethernet-connected programmable device includes eight digital alarm switches and eight channels of 4- to 20-milliampere outputs that can be used with up to seven Flir cameras. Once the module is connected to a camera, it starts monitoring. MIO can be expanded by adding more modules, Tarin says. MoviTHERM sells the MIO in several versions for different Flir camera models. Each variant supports from one to seven cameras, and they range in price from US $895 to $5,995. MIO allows customers to “just click and connect multiple Flir cameras and set up the alarms without programming any software,” Tarin says. “The intelligent module sits on the network along with the cameras and sounds an alarm if a camera detects a hot spot,” he says. MIO won the 2016 Innovators Award for industry-best product from Vision Systems Design magazine. Tarin says Flir was “so fascinated by MIO that it began distributing the module worldwide.” “It’s the only product the company is distributing that’s not a Flir product,” he says. But, he says, the MIO series lacks the ability to send alerts to customers via voice, text, or email. It can use its built-in digital alarm outputs only to announce an alarm via a connected tool such as a siren or a flashing light.
SMARTER FEATURES To add those features and more, Tarin late last year introduced MoviTHERM’s subscription-based iEFD early fire detection system, which can monitor and record a facility’s temperatures throughout the day. The system uses interconnected sensors, instruments, and other tools connected to cloud-based industrial software applications. It can check its own cameras and sensors to make sure they are working. Users can monitor and analyze data via an online dashboard. If a camera detects a hot spot that could potentially develop into a fire, the system can send an alert to the phones of workers near the area to warn them, potentially giving them time to remove an item before it ignites, Tarin says. The system also includes an interactive real-time map view of the facility that can be emailed to firefighters and first responders. The map can include information about the best way to access a facility and the location of utilities at the site, such as water, gas, and electricity. “Firefighters often aren’t familiar with the facility, so they lose valuable time by driving around, figuring out how to enter the facility, and where to go,” Tarin says. “The map shows the best entry point to the facility and where the fire hydrants, water valves, electrical cabinets, gas lines, and so on are located. It also shows them where the fire is. “We recently demonstrated this map to a fire marshal for a recycling facility, and he was blown away by it.” By stopping fires, Tarin says, his system helps prevent toxic emissions from entering the atmosphere. “Once you put the fire out, you have more or less an environmental disaster on your hands because you’re flushing all the hazardous stuff with fire suppressant, which itself might also be hazardous to the environment,” he says. “We are trying to prevent catastrophic losses and environmental damage.”
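    Strip away the dashboards and self-checks and the core detection step is conceptually simple: compare each pixel of a thermal frame against an alert threshold and notify someone before the spot ignites. The Python sketch below is a generic illustration of that idea, not MoviTHERM’s or Flir’s code; the threshold, frame size, and simulated data are all assumptions.

        import numpy as np

        ALARM_C = 150.0   # assumed alert threshold, in degrees Celsius

        def hot_spots(frame_c):
            """Return (row, col, temperature) for every pixel above the alarm threshold."""
            rows, cols = np.where(frame_c > ALARM_C)
            return [(int(r), int(c), float(frame_c[r, c])) for r, c in zip(rows, cols)]

        frame = 25.0 + 5.0 * np.random.rand(240, 320)   # stand-in for one thermal image
        frame[120, 160] = 180.0                         # simulated smoldering spot
        for r, c, t in hot_spots(frame):
            print(f"ALERT: hot spot at ({r}, {c}), {t:.1f} degrees C")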

  • Mayo Clinic Researchers Pump Up Wearable ECG Functions With AI
    by Greg Goth on 24. May 2022. at 15:00

    Mayo Clinic researchers have developed an artificial-intelligence algorithm that can detect weak heart-pump functioning from a single-lead electrocardiogram (ECG) on the Apple Watch. Early results indicate that the ECG is as accurate as a medically ordered treadmill stress test but could be performed anywhere, the researchers say. The single-lead AI algorithm was adapted from an existing algorithm that works by analyzing ventricular pumping data from a 12-lead ECG already in clinical use under an Emergency Use Authorization from the U.S. Food and Drug Administration (FDA). Dr. Paul Friedman, chair of the clinic’s department of cardiovascular medicine, said the new technology could signal a new era for patients for whom making repeated trips to a hospital or cardiologist’s office is inconvenient at the very least and possibly health-threatening. “We wanted to see if it could be done at home, and there are medical reasons why you might want to do that,” Friedman said. “For example, cancer patients receiving chemotherapy usually get echocardiograms every two to three months, but we can’t do it more often because it may damage the heart. “When you think about a 12-lead ECG, you’re lying on a table,” Friedman added. “There’s a trained technologist preparing your skin, making sure you’re still, and that the quality of the data is good. But an ECG on a wearable can happen while you’re slouching on your sofa after dinner. Conditions are different; environments are noisy, and body positions vary. So it took a lot of technical ingenuity to make it fly. But when we tested it, it worked incredibly well.” Averaging out The AI algorithm the Mayo team created in 2019 for the 12-lead ECG used a convolutional neural network to detect subtle signs of ventricular dysfunction by comparing ECG data with labeled data from echocardiograms of the same patients. The researchers trained and validated the network with patient data from 44,959 ECG-echocardiogram pairs and tested it on 52,870 patients. The team found the network was 86 percent accurate in identifying patients with a weak ejection fraction, a sign of cardiac dysfunction. The network’s accuracy, Friedman said, was better than that of a mammogram in detecting evidence of cancer in a breast. Itzhak Zachi Attia, the department’s lead AI scientist, said his team hypothesized that the watch-based ECG could be accurate enough to signal weak ejection fraction, and tested the idea by going back to the 12-lead data set and reconfiguring the network. The new reconfiguration looked at only the data from one lead, which measured electrical voltage over time between the left and right arms. That data is similar to the information generated by the Apple Watch’s ECG, Attia said. The Mayo Clinic team then calculated the average reading from a month’s worth of ECGs using the Apple Watch to yield a better signal-to-noise ratio. Almost 2,500 patients—each with an iPhone, the Mayo Clinic app, and a Series 4 or later Apple Watch—took part in the study. But the investigators used only the data from 420 patients who’d had a clinically prescribed echocardiogram within 30 days of their ECG data being analyzed. Attia said the test had an area under the curve of 0.88, meaning it was as accurate as or slightly better than a commonly used treadmill stress test. The Mayo Clinic team is now running a worldwide clinical trial of the technology (scheduled to continue through June 2023) for patients of the clinic who fit the cohort parameters.
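    The article doesn’t specify whether the month of watch ECGs is averaged at the waveform level or at the level of the algorithm’s per-recording outputs, but the statistical payoff is similar either way: for roughly independent noise, averaging n readings shrinks the noise by about the square root of n. A toy Python illustration with made-up numbers:

        import random
        import statistics

        TRUE_VALUE = 0.70   # assumed noise-free value for one patient
        NOISE_SD = 0.20     # assumed noise on any single watch recording

        readings = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(30)]  # one month
        monthly_average = statistics.mean(readings)
        print(f"single-reading noise ~{NOISE_SD:.2f}, "
              f"30-reading average noise ~{NOISE_SD / 30 ** 0.5:.2f}, "
              f"averaged value = {monthly_average:.2f}")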
Friedman said the technique can be adapted to any wearable with an ECG function; the vital data resides in existing clinical data sets. “It is absolutely agnostic,” he said. “Other wearable devices that deliver a suitable quality ECG would also be able to support the algorithm.” FDA approves Fitbit A-fib technology Almost contemporaneously with the Mayo findings, the FDA expanded the approved ecosystem of wearable cardiac-monitoring devices: The agency approved Fitbit’s atrial fibrillation (A-fib) detection mechanism. The FDA approved the Fitbit technology in April and classified it as being substantially equivalent to the Apple Watch A-fib technology, which received FDA approval in October 2021. Both technologies rely on photoplethysmograph (PPG) sensors that detect changes in blood flow. While a normal sinus rhythm is recognizable through regularly spaced pulses and consistent morphologies, the PPG algorithm detects possible A-fibs as irregular intervals between pulses and varying pulse morphologies. One of the drawbacks of PPG technology in sensing A-fib, however, is a vulnerability to signal noise caused by factors such as a person’s movement. To reduce chances of noisy A-fib detection, both the Fitbit and Apple PPG technologies are recommended for use when a subject is sitting or lying still. Fitbit lead research scientist Tony Faranesh said the company’s technology is especially effective while a user is asleep. As an over-the-counter technology, it is not intended as a formal diagnostic tool but rather as an early warning signal: “We believe there may be benefit both to patients and the health-care system in catching irregular heart rhythms earlier, before medical events such as strokes occur,” Faranesh said. Friedman said these recent demonstrations of expanded cardiac-monitoring functionality on wearables could serve as vital decision support for frontline physicians who don’t have large data sets or specialized cardiac expertise at their disposal. “Hopefully, this type of technology, if properly deployed, will enhance the capabilities of small-town doctors everywhere by putting some of the skills of a tertiary-care cardiologist in their pockets.”

  • A Dragon Comes Home
    by Willie Jones on 24. May 2022. at 15:00

    The Big Picture features technology through the lens of photographers. Every month, IEEE Spectrum selects the most stunning technology images recently captured by photographers around the world. We choose images that reflect an important advance, or a trend, or that are just mesmerizing to look at. We feature all images on our site, and one also appears on our monthly print edition. Enjoy the latest images, and if you have suggestions, leave a comment below. Figure From Fiction For centuries, people in China have maintained a posture of awe and reverence for dragons. In traditional Chinese culture, the dragon—which symbolizes power, nobility, honor, luck, and success in business—even has a place in the calendar; every twelfth year is a dragon year. Flying, fire-breathing horses covered in lizard scales have been part of legend, lore, and literature since those things first existed. Now, in the age of advanced technology, an engineer has created his own mechatronic version of the mythical beast. François Delarozière, founder and artistic director of French street-performance company La Machine, is shown riding his brainchild, called Long Ma. The 72-tonne steel-and-wood automaton can carry 50 people on a covered terrace built into its back and still walk at speeds of up to 4 kilometers per hour. It will flap its leather-and-canvas-covered wings, and shoot fire, smoke, or steam from its mouth, nose, eyelids, and more than two dozen other vents located along its 25-meter-long body. Long Ma spends most of its time in China, but the mechanical beast has been transported to France so it can participate in fairs there this summer. It has already been featured at the Toulouse International Fair, where it thrilled onlookers from 9 to 18 April. Alain Pitton/NurPhoto/AP Body Area Network Your social media accounts and your credit card information are not the only targets that are in cybercrooks’ crosshairs. Criminals will take advantage of the slightest lapse in the security even of electronic medical devices such as pacemakers, implantable insulin pumps, and neural implants. No one wants to imagine their final experience to be a hostile takeover of their life-saving medical device. So, researchers are brainstorming ideas for foiling cyberattacks on such devices that exploit security weak points in their wireless power or Internet connections. A team at Columbia University, in New York City, has developed a wireless-communication technique for wearable medical devices that sends signals securely through body tissue. Signals are sent from a pair of implanted transmitters to a pair of receivers that are temporarily attached to the device user’s skin. Contrast this with RF communication, where the device is continuously transmitting data waiting for the receiver to catch the signal. With this system, there is no security risk, because there are no unencrypted electromagnetic waves sent out into the air to hack. The tiny transmitter-receiver pair pictured here can communicate through the petal of a flower. Larger versions, say the Columbia researchers, will get signals from transmitters located adjacent to internal organs deep within the body to noninvasive external receivers stuck onto the skin. Dion Khodagholy/Columbia Engineering Sun in a Box Anyone who has ever paid attention to how an incandescent lightbulb works knows that a significant amount of the energy aimed at creating light is lost as heat. 
    The same is true in reverse, when solar panels lose some of the energy in photons as heat instead of it all being converted into electrons. Scientists have been steadily cutting these losses and ramping up the efficiency of photovoltaics, with the aim of bringing them to operational and economic parity with power plants that generate electricity via the spinning of turbines. The most efficient turbine-based generators convert only about 35 percent of the total theoretical energy contained in, say, natural gas into electrical charge. And until recently, that was enough to keep them head and shoulders above solar cells. But the tide looks to be turning. A thermophotovoltaic (TPV) cell developed by engineers at MIT has eclipsed the 40-percent-efficiency mark. The so-called “Sun in a Box” captures enough light energy that it reaches temperatures above 2,200 °C. At these temperatures, a silicon filament inside the box emits light in the infrared range. Those infrared photons get converted from light to charge instead of more heat, ultimately boosting the device’s overall conversion efficiency. The TPV’s creators and outside observers believe that such devices could operate at 50-percent efficiency at higher temperatures. That, say the MIT researchers, could dramatically lower the cost of electric power, and turn the fossil-fuel- and fission-fired power plants upon which we so heavily rely into quaint anachronisms. “A turbine-based power production system’s cost is usually on the order of [US] $1 per watt. However, for thermophotovoltaics, there is potential to reduce it to the order of 10 cents per watt,” says Asegun Henry, the MIT professor of mechanical engineering who led the team that produced the TPV cell. Felice Frankel One Large Rat, Hold the Droppings Rats are irrepressible. They go where they want, eat what they want, and seem immune to our best efforts to eradicate them and the pathogens they carry. Scientists have now decided that, since we cannot beat them, the smart thing to do is to recruit them for our purposes. But training rodents to carry out our wishes while ignoring their own instinctive drives is not likely to be a successful endeavor. Therefore, researchers are making robotic rats that have real rodents’ physical features but can be remotely controlled. One of the first use cases is in disaster zones, where debris and unstable terrain make it too dangerous for human rescue workers to tread. The robotic rat pictured here is a product of a group of researchers at the Beijing Institute of Technology. They tried other designs, but “large quadruped robots cannot enter narrow spaces, while micro quadruped robots can enter the narrow spaces but face difficulty in performing tasks, owing to their limited ability to carry heavy loads,” says Professor Qing Shi, a member of the team that developed the automaton rodent. They decided to model their machine after the rat because of how adept it is at squeezing into tight spaces and turning on a dime, and its remarkable strength relative to its size. Qing Shi

  • Engineers Are Working on a Solar Microgrid to Outlast Lunar Nights
    by Payal Dhar on 23. May 2022. at 17:36

    The next time humans land on the moon, they intend to stay awhile. For the Artemis program, NASA and its collaborators want to build a sustained presence on the moon, which includes setting up a base where astronauts can live and work. One of the crucial elements for a functioning lunar base is a power supply. Sandia National Laboratories, a research and development lab that specializes in building microgrids for military bases, is teaming up with NASA to design one that will work on the moon. The moon base is expected to be a technological proving ground for humans to venture farther into space—such as voyaging to Mars. Therefore, a power grid will not simply keep the lights on and air pumping but also support the mining and fuel-processing facilities that will concurrently work to reduce the supply requirements from Earth. There are, of course, some differences between designing a microgrid for a moon base and designing a similar setup used on Earth. Notably, it will need to keep astronauts alive, rather than just support a conventional household load, says Rachid Darbali-Zamora, an electrical engineer at Sandia. For that, energy storage and power management will be critical. The lunar habitat will include a living unit as well as a mining and processing center that will produce water, oxygen, rocket fuel, and more. The Sandia engineers therefore are looking at two direct-current microgrids, with a tie-line connecting them. Lee Rashkin, another electrical engineer at Sandia, says that they are working to define the parameters of the tie-line. “We are looking at the voltage being a little bit higher [than the two load centers] because it is going to have to span several kilometers between [them],” he says. Pushing the power through at a higher voltage would require less current, making the long run a little bit easier, he adds. The habitation unit will be about the same size as the International Space Station, the mining and processing center a bit larger, with a distance of around 10 kilometers between them, Rashkin says. While both systems will be designed to be self-sufficient, “the tie-line is there primarily for redundancy. If something happens to one of the [photovoltaic] generators at the habitat unit, it can import power to maintain those loads, which are critical to keep people alive,” he says. Each system will also have its own redundancy, rerouting, and reconfiguration capabilities, Darbali-Zamora adds. “So if a particular line of the lunar habitat is serving a critical load, and that line goes down, there are mechanisms to reroute power to receive it from a source.” Because the portions of the planned moon base will be spread across the lunar surface, the engineers also expect there will be a lot of power electronics and distributed energy resources. “Power electronics is essentially the electrical equivalent of a gearbox,” Rashkin says. “Like a gearbox converts from one speed and torque value to another, power electronics can convert from one power and voltage level to another.” These power electronic converters will be key in managing the power between the battery or the solar panels and the main bus. The main energy source will be solar, supplemented by batteries. Unlike on Earth, there is no cloud shading on the moon, which means the lunar surface receives more direct sunlight. Darbali-Zamora sees it as an advantage in some ways, but they will have to account for lunar nights, which are approximately two Earth weeks long.
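    The voltage argument is just Ohm’s law: for a fixed power, the current falls as the voltage rises, and resistive loss in the tie-line falls with the square of the current. A back-of-the-envelope Python check, using assumed numbers rather than anything from Sandia:

        P_LOAD_W = 10_000.0   # assumed power carried by the tie-line
        R_LINE_OHM = 5.0      # assumed total resistance of a roughly 10 km cable run

        for v in (120.0, 600.0, 1500.0):
            i = P_LOAD_W / v              # line current at this voltage
            loss = i ** 2 * R_LINE_OHM    # ohmic loss dissipated in the cable
            print(f"{v:6.0f} V: {i:6.1f} A, {loss:9.1f} W lost "
                  f"({100 * loss / P_LOAD_W:.1f}% of the load)")

    At the lowest voltage in this toy example the cable would dissipate more than the load itself consumes, which is exactly why the engineers want the tie-line to run at a higher voltage than the two load centers.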
    There are energy storage options, Darbali-Zamora says, depending on what kind of battery material or chemistry they want. “But the idea is, when there’s more solar generation than load demand, the solar panels will charge the battery,” he says. “Part of the work that we’re doing is defining controls that manage that, ensuring that the batteries aren’t completely depleted and that there’s synergy between the generation from the solar panels, the consumption of the loads, and the charging and discharging of the battery.” To do this, the engineers have controls based on timescales—from units that operate at submillisecond speeds to those that work over days, planning out where the state of charge needs to be at any time. “And one of those constraints on that [latter] level of control is that the energy storage needs to be fully charged by the time the sun is gone,” Rashkin adds. All of the testing and tweaking will take place at Sandia’s Secure Scalable Microgrid Testbed, in Albuquerque. “We have emulation capabilities for all of [the specifications] being planned for the moon base,” Rashkin says. The testbed can be used to build a scaled-down representation of the lunar microgrid, and to study the power system controllers, energy storage, power electronics, and distributed energy sources. “We’re planning on using that for a lot of our control design analysis.” Once they have the controls, Rashkin’s team will pass them on to Darbali-Zamora’s to test in Sandia’s Distributed Energy Technology Laboratory. With power hardware-in-the-loop capabilities, they can test physical devices, such as the controllers built by Rashkin’s team, in simulated environments, like a lunar-base simulator. “We can even simulate two separate systems—for example, the lunar habitat in one emulator and the mining and production in another, and the tie-line or the converters that compose this tie-line,” Darbali-Zamora says. There’s still a way to go before their work ends up on the moon, but both engineers point out that the work they are doing is not completely decoupled from what they do terrestrially. “We are hoping that a lot of the solutions that we find in this project can be implemented here on Earth” to build better and more resilient systems, says Darbali-Zamora.
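    That day-and-night bookkeeping can be reduced to a toy energy balance: the battery has to carry the load through a roughly 14-Earth-day night, and the daytime array has to cover the load while also refilling the battery before darkness returns. Every number in the Python sketch below is an illustrative assumption, not a Sandia design value.

        HOURS_NIGHT = 14 * 24    # roughly two Earth weeks of lunar night
        HOURS_DAY = 14 * 24      # roughly two Earth weeks of sunlight
        LOAD_KW = 10.0           # assumed average habitat load
        BATTERY_KWH = 4000.0     # assumed usable storage capacity

        night_demand_kwh = LOAD_KW * HOURS_NIGHT
        pv_needed_kw = LOAD_KW + night_demand_kwh / HOURS_DAY   # serve the load and recharge
        print(f"Night demand: {night_demand_kwh:.0f} kWh "
              f"(battery sufficient: {BATTERY_KWH >= night_demand_kwh})")
        print(f"Minimum daytime solar power: {pv_needed_kw:.1f} kW")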

  • Practical Power Beaming Gets Real
    by Paul Jaffe on 21. May 2022. at 15:00

    Wires have a lot going for them when it comes to moving electric power around, but they have their drawbacks too. Who, after all, hasn’t tired of having to plug in and unplug their phone and other rechargeable gizmos? It’s a nuisance. Wires also challenge electric utilities: These companies must take pains to boost the voltage they apply to their transmission cables to very high values to avoid dissipating most of the power along the way. And when it comes to powering public transportation, including electric trains and trams, wires need to be used in tandem with rolling or sliding contacts, which are troublesome to maintain, can spark, and in some settings will generate problematic contaminants. Many people are hungry for solutions to these issues—witness the widespread adoption over the past decade of wireless charging, mostly for portable consumer electronics but also for vehicles. While a wireless charger saves you from having to connect and disconnect cables repeatedly, the distance over which energy can be delivered this way is quite short. Indeed, it’s hard to recharge or power a device when the air gap is just a few centimeters, much less a few meters. Is there really no practical way to send power over greater distances without wires? To some, the whole notion of wireless power transmission evokes images of Nikola Tesla with high-voltage coils spewing miniature bolts of lightning. This wouldn’t be such a silly connection to make. Tesla had indeed pursued the idea of somehow using the ground and atmosphere as a conduit for long-distance power transmission, a plan that went nowhere. But his dream of sending electric power over great distances without wires has persisted. To underscore how safe the system was, the host of the BBC science program “Bang Goes the Theory” stuck his face fully into a power beam. Guglielmo Marconi, who was Tesla’s contemporary, figured out how to use “Hertzian waves,” or electromagnetic waves, as we call them today, to send signals over long distances. And that advance brought with it the possibility of using the same kind of waves to carry energy from one place to another. This is, after all, how all the energy stored in wood, coal, oil, and natural gas originally got here: It was transmitted 150 million kilometers through space as electromagnetic waves—sunlight—most of it millions of years ago. Can the same basic physics be harnessed to replace wires today? My colleagues and I at the U.S. Naval Research Laboratory, in Washington, D.C., think so, and here are some of the reasons why. There have been sporadic efforts over the past century to use electromagnetic waves as a means of wireless power transmission, but these attempts produced mixed results. Perhaps the golden year for research on wireless power transmission was 1975, when William Brown, who worked for Raytheon, and Richard Dickinson of NASA’s Jet Propulsion Laboratory (now retired) used microwaves to beam power across a lab with greater than 50 percent end-to-end efficiency. In a separate demonstration, they were able to deliver more than 30 kilowatts over a distance of about a mile (1.6 kilometers). These demonstrations were part of a larger NASA and U.S. Department of Energy campaign to explore the feasibility of solar-power satellites, which, it was proposed, would one day harvest sunlight in space and beam the energy down to Earth as microwaves. 
    But because this line of research was motivated in large part by the energy crisis of the 1970s, interest in solar-power satellites waned in the following decades, at least in the United States. Although researchers revisit the idea of solar-power satellites with some regularity, those performing actual demonstrations of power beaming have struggled to surpass the high-water mark for efficiency, distance, and power level reached in 1975. But that situation is starting to change, thanks to various recent advances in transmission and reception technologies. During a 2019 demonstration at the Naval Surface Warfare Center in Bethesda, Md., this laser beam safely conveyed 400 watts over a distance of 325 meters. U.S. Naval Research Laboratory Most early efforts to beam power were confined to microwave frequencies, the same part of the electromagnetic spectrum that today teems with Wi-Fi, Bluetooth, and various other wireless signals. That choice was, in part, driven by the simple fact that efficient microwave transmitting and receiving equipment was readily available. But there have been improvements in efficiency and increased availability of devices that operate at much higher frequencies. Because of limitations imposed by the atmosphere on the effective transmission of energy within certain sections of the electromagnetic spectrum, researchers have focused on microwave, millimeter-wave, and optical frequencies. While microwave frequencies have a slight edge when it comes to efficiency, they require larger antennas. So, for many applications, millimeter-wave or optical links work better. For systems that use microwaves and millimeter waves, the transmitters typically employ solid-state electronic amplifiers and phased-array, parabolic, or metamaterial antennas. The receiver for microwaves or millimeter waves uses an array of elements called rectennas. This word, a portmanteau of rectifier and antenna, reflects how each element converts the electromagnetic waves into direct-current electricity. Any system designed for optical power transmission would likely use a laser—one with a tightly confined beam, such as a fiber laser. The receivers for optical power transmission are specialized photovoltaic cells designed to convert a single wavelength of light into electric power with very high efficiency. Indeed, efficiencies can exceed 70 percent, more than double that of a typical solar cell. At the U.S. Naval Research Laboratory, we have spent the better part of the past 15 years looking into different options for power beaming and investigating potential applications. These include extending the flight times and payload capacities of drones, powering satellites in orbit when they are in darkness, powering rovers operating in permanently shadowed regions of the moon, sending energy to Earth’s surface from space, and distributing energy to troops on the battlefield. You might think that a device for sending large amounts of energy through the air in a narrow beam sounds like a death ray. This gets to the heart of a critical consideration: power density. Different power densities are technically possible, ranging from too low to be useful to high enough to be dangerous. But it’s also possible to find a happy medium between these two extremes. And there are also clever ways to permit beams with high power densities to be used safely. That’s exactly what a team I was part of did in 2019, and we’ve successfully extended this work since then.
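    The antenna-size trade-off follows from diffraction: a beam of wavelength λ launched from an aperture of diameter D spreads to a spot roughly 2.44 λR/D across at range R. A quick Python comparison, in which the 5.4-meter dish matches the microwave demonstration described later in this article, while the 10-centimeter optic and 1.07-micrometer wavelength are assumed, typical fiber-laser values:

        C = 3.0e8   # speed of light, m/s

        def spot_diameter_m(wavelength_m, range_m, aperture_m):
            """Approximate diffraction-limited spot diameter at the given range."""
            return 2.44 * wavelength_m * range_m / aperture_m

        RANGE_M = 1000.0
        for label, wavelength, aperture in (
            ("9.7 GHz microwave, 5.4 m dish", C / 9.7e9, 5.4),
            ("1.07 um fiber laser, 0.1 m optic", 1.07e-6, 0.1),
        ):
            print(f"{label}: ~{spot_diameter_m(wavelength, RANGE_M, aperture):.2f} m spot "
                  f"at {RANGE_M:.0f} m")

    The microwave beam needs a receiving aperture meters across at kilometer range, while the laser spot stays a few centimeters wide, which is the practical reason optical links can get by with much smaller hardware at both ends.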
One of our industry partners, PowerLight Technologies, formerly known as LaserMotive, has been developing laser-based power-beaming systems for more than a decade. Renowned for winning the NASA Power Beaming Challenge in 2009, this company has not only achieved success in powering robotic tether climbers, quadcopters, and fixed-wing drones, but it has also delved deeply into the challenges of safely beaming power with lasers. That’s key, because many research groups have demonstrated laser power beaming over the years—including teams at the Naval Research Laboratory, Kindai University, the Beijing Institute of Technology, the University of Colorado Boulder, JAXA, Airbus, and others—but only a few have accomplished it in a fashion that is truly safe under every plausible circumstance. There have been many demonstrations of power beaming over the years, using either microwaves [blue] or lasers [red], with the peak-power record having been set in 1975 [top]. In 2021, the author and his colleagues took second and third place for the peak-power level achieved in such experiments, having beamed more than a kilowatt over distances that exceeded a kilometer, using much smaller antennas. David Schneider Perhaps the most dramatic demonstration of safe laser power beaming prior to our team’s effort was by the company Lighthouse Dev in 2012. To underscore how safe the system was, the host of the BBC science program “Bang Goes the Theory” stuck his face fully into a power beam sent between buildings at the University of Maryland. This particular demonstration took advantage of the fact that some infrared wavelengths are an order of magnitude safer for your eyes than other parts of the infrared spectrum. That strategy works for relatively low-power systems. But as you push the level higher, you soon get to power densities that raise safety concerns regardless of the wavelength used. What then? Here’s where the system we’ve demonstrated sets itself apart. While sending more than 400 watts over a distance that exceeded 300 meters, the beam was contained within a virtual enclosure, one that could sense an object impinging on it and trigger the equipment to cut power to the main beam before any damage was done. Other testing has shown how transmission distances can exceed a kilometer. Careful testing (for which no BBC science-program hosts were used) verified to our satisfaction the functionality of this feature, which also passed muster with the Navy’s Laser Safety Review Board. During the course of our demonstration, the system further proved itself when, on several occasions, birds flew toward the beam, shutting it off—but only momentarily. You see, the system monitors the volume the beam occupies, along with its immediate surroundings, allowing the power link to automatically reestablish itself when the path is once again clear. Think of it as a more sophisticated version of a garage-door safety sensor, where the interruption of a guard beam triggers the motor driving the door to shut off. The 400 watts we were able to transmit was, admittedly, not a huge amount, but it was sufficient to brew us some coffee. For our demonstrations, observers in attendance were able to walk around between the transmitter and receiver without needing to wear laser-safety eyewear or take any other precautions. 
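    The virtual enclosure described above amounts to a fast interlock: any break in the guard perimeter kills the main beam, and the beam comes back only after the path has stayed clear for a while. The Python sketch below captures that supervisory logic in the abstract; the timing, polling rate, and the beam and sensor objects are all assumptions, not PowerLight’s or the Navy’s implementation.

        import time

        HOLD_OFF_S = 2.0   # assumed time the path must stay clear before the beam resumes

        def supervise(beam, guard_clear, poll_s=0.05):
            """beam: hypothetical object with enable()/disable() methods.
            guard_clear: function returning True while the guard perimeter is unbroken."""
            clear_since = None
            beam.disable()                      # start with the main beam off
            while True:
                if not guard_clear():
                    beam.disable()              # intrusion: cut the main beam immediately
                    clear_since = None
                else:
                    if clear_since is None:
                        clear_since = time.monotonic()
                    if time.monotonic() - clear_since >= HOLD_OFF_S:
                        beam.enable()           # path clear long enough: restore power
                time.sleep(poll_s)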
    Such precautions were unnecessary because, in addition to designing the system so that it can shut itself down automatically, we took care to consider the possible effects of reflections from the receiver or the scattering of light from particles suspended in the air along the path of the beam. Last year, the author and his colleagues carried out a demonstration at the U.S. Army’s Blossom Point test facility south of Washington, D.C. They used 9.7-gigahertz microwaves to send 1,649 watts (peak power) from a transmitter outfitted with a 5.4-meter-diameter parabolic dish [top] over a distance of 1,046 meters to a 2-by-2-meter “rectenna” [middle] mounted on a tower [bottom], which transformed the beam into usable electric power. U.S. Naval Research Laboratory The 400 watts we were able to transmit was, admittedly, not a huge amount, but it was sufficient to brew us some coffee, continuing what’s become de rigueur in this line of experimentation: making a hot beverage. (The Japanese researchers who started this tradition in 2015 prepared themselves some tea.) Our next goal is to apply power beaming, with fully integrated safety measures, to mobile platforms. For that, we expect to increase the distance covered and the amount of power delivered. But we’re not alone: Other governments, established companies, and startups around the world are working to develop their own power-beaming systems. Japan has long been a leader in microwave and laser power beaming, and China has closed the gap if not pulled ahead, as has South Korea. At the consumer-electronics level, there are many players: Powercast, Ossia, Energous, GuRu, and Wi-Charge among them. And the multinational technology giant Huawei expects power beaming for smartphone charging within “two or three [phone] generations.” For industrial applications, companies like Reach Labs, TransferFi, MH GoPower, and MetaPower are making headway in employing power beaming to solve the thorny problem of keeping batteries for robots and sensors, in warehouses and elsewhere, topped off and ready to go. At the grid level, Emrod and others are attempting to scale power beaming to new heights. On the R&D front, our team demonstrated within the past year safe microwave wireless power transmission of 1.6 kilowatts over a distance of a kilometer. Companies like II-VI Aerospace & Defense, Peraton Labs, Lighthouse Dev, and others have also recently made impressive strides. Today, ambitious startups like Solar Space Technologies, Solaren, Virtus Solis, and others operating in stealth mode are working hard to be the first to achieve practical power beaming from space to Earth. As such companies establish proven track records for safety and make compelling arguments for the utility of their systems, we are likely to see whole new architectures emerge for sending power from place to place. Imagine drones that can fly for indefinite periods and electrical devices that never need to be plugged in—ever—and being able to provide people anywhere in the world with energy when hurricanes or other natural disasters ravage the local power grid. Reducing the need to transport fuel, batteries, or other forms of stored energy will have far-reaching consequences. It’s not the only option when you can’t string wires, but my colleagues and I expect, within the set of possible technologies for providing electricity to far-flung spots, that power beaming will, quite literally, shine. This article appears in the June 2022 print issue as “Spooky Power at a Distance.”

  • Video Friday: Drone in a Cage
    by Evan Ackerman on 20. May 2022. at 20:39

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ICRA 2022: 23 May–27 May 2022, PHILADELPHIA IEEE ARSO 2022: 28 May–30 May 2022, LONG BEACH, CALIF. RSS 2022: 21 June–1 July 2022, NEW YORK CITY ERF 2022: 28 June–30 June 2022, ROTTERDAM, NETHERLANDS RoboCup 2022: 11 July–17 July 2022, BANGKOK IEEE CASE 2022: 20 August–24 August 2022, MEXICO CITY CLAWAR 2022: 12 September–14 September 2022, AZORES, PORTUGAL Enjoy today’s videos! After four years of development, Flyability has announced the Elios 3, which you are more than welcome to smash into anything you like. “The Elios 3 is the single biggest project that Flyability has ever undertaken,” said Adrien Briod, CTO of Flyability. “If you think of the Elios 2 as your classic flip phone, only designed to make phone calls, the Elios 3 is the smartphone. It’s made to be customized for the specific demands of each user, letting you attach the payload you need so you can use the tool as you like, and allowing it to grow and improve over time with new payloads or software solutions.” [ Flyability ] We get that Digit is good at walking under things, but if Agility wants to make the robot more relatable, it should program Digit to bump its head like 5 percent of the time. We all do it. [ Agility ] Skybrush is a drone-show management platform that’s now open source, and if drone shows aren’t your thing, it’s also good for coordinating multiple drones in any other way you want. Or you can make drone shows your thing! We share Skybrush because we are proud of it, and because we envision a growing community around it, consisting of enthusiastic and motivated experts and users all around the world who can join our mission to create something great for the future. The drone industry is evolving at light speed, our team alone is too small yet to keep pace with it. But we have a core that is rock solid and we know for sure that great things can be built on top of it. [ Skybrush ] This happened back in the fall of 2021, but it’s still cool seeing the full video of a Gremlin launch, flight, and capture sequence. [ Dynetics ] NASA’s InSight lander touched down in the Elysium Planitia region of Mars in November of 2018. During its time on the Red Planet, InSight has achieved all its primary science goals and continues to hunt for quakes on Mars. [ Insight ] This kite-powered drone is blowing my mind. [ Kite Propulsion ] A friendly reminder that Tertill is anxious to massacre the weeds in your garden. [ Tertill ] I am not a fan of this ElliQ commercial. [ ElliQ ] We are excited to announce that the 2022 edition of the Swiss Drone Days will take place on 11–12 June in Dubendorf/Zurich. The event will feature live demos including autonomous drone racing...in one of the largest drone flying arenas in the world, spectacular drone races by the Swiss drone league, presentations of distinguished speakers, [and] an exhibition and trade fair. [ Drone Days ] Enjoy 8 minutes of fast-paced, extremely dramatic, absolutely mind-blowing robot football highlights. [ RoboCup ] This week’s GRASP on Robotics seminar is from Katherine Kuchenbecker at the Max Planck Institute for Intelligent Systems, on haptics and physical human-robot interaction. “A haptic interface is a mechatronic system that modulates the physical interaction between a human and their tangible surroundings. 
Such systems typically take the form of grounded kinesthetic devices, ungrounded wearable devices, or surface devices, and they enable the user to act on and feel a remote or virtual environment. I will elucidate key approaches to creating effective haptic interfaces by showcasing several systems my team created and evaluated over the years.” [ UPenn ] This Lockheed Martin Robotics Seminar is from Xuesu Xiao from The Everyday Robot Project at X, on Deployable Robots that Learn. “While many robots are currently deployable in factories, warehouses, and homes, their autonomous deployment requires either the deployment environments to be highly controlled, or the deployment to only entail executing one single preprogrammed task. These deployable robots do not learn to address changes and to improve performance. For uncontrolled environments and for novel tasks, current robots must seek help from highly skilled robot operators for teleoperated (not autonomous) deployment. In this talk, I will present three approaches to removing these limitations by learning to enable autonomous deployment in the context of mobile robot navigation, a common core capability for deployable robots. Building on robust autonomous navigation, I will discuss my vision toward a hardened, reliable, and resilient robot fleet which is also task-efficient and continually learns from each other and from humans.” [ UMD ]

  • Remembering 1982 IEEE President Robert Larson
    by Joanna Goodrich on 20. May 2022. at 18:00

    Robert E. Larson, 1982 IEEE president, died on 10 March at the age of 83. An active volunteer who held many high-level positions throughout the organization, Larson was the 1975–1976 president of the IEEE Control Systems Society and also served as IEEE Foundation president. Larson worked as a power engineer for Hughes Aircraft, IBM, the Stanford Research Institute (now SRI International), and other companies. He helped to found Systems Control, a computer system designer and manufacturer in Palo Alto, Calif., and he was its chief executive for almost 15 years. He also volunteered with IEEE Smart Village, a program that brings electricity—as well as educational and employment opportunities—to remote communities. Smart Village cofounder IEEE Life Fellow Ray Larsen says Larson rarely missed the program’s biweekly meetings. “He and his wife, Sue, became generous donors. Bob and I often had lunch, where I updated him on our latest challenges,” Larsen says. “It was a great honor to benefit from his deep wisdom, constant support, and friendship.” CHOOSING ENGINEERING Larson was born in Stockton, Calif., where his father was a physics professor at the University of the Pacific. In 1942 his father was recruited to work on the Manhattan Project, so the family moved to Oak Ridge, Tenn., where the plutonium and the uranium enrichment plants were located. “Oak Ridge was a very scientifically oriented community,” especially during World War II, Larson said in a 2009 oral history conducted by the IEEE History Center. “Therefore, I was slated to go into science in some respect. My father’s preference was that I would become a medical doctor, but I got interested in computers at an early age. I built computers when I was in high school using telephone relays and things of that sort.” He earned a bachelor’s degree in electrical engineering in 1960 from MIT. While pursuing his degree, he worked at IBM on its first transistorized supercomputer, the IBM 7030, known as Stretch. The computer’s development led to software and hardware such as multiprogramming, memory protection, and CPUs being incorporated in IBM’s line of computers. Larson moved back to California to continue his education in “warmer weather,” according to his oral history. He received a master’s degree in EE from Stanford in 1961, then continued at the school as a doctoral student. He conducted his thesis research at Hughes Aircraft, where he designed computers for spacecraft. After graduating in 1964, he joined SRI, where he worked on ballistic missile defense and electric power systems. While there, he developed tracking technology for missile reentry vehicles. He also designed technology for an air defense system that could remotely shoot down enemy missiles. He left SRI after four years and, along with several coworkers, founded Systems Control. The company was sold to British Petroleum in 1982. From 1983 to 2012, Larson served as a general partner and technical advisor to the Woodside Fund, a venture-capital firm in Redwood City, Calif. He was a consulting professor in the engineering-economics systems department at Stanford from 1973 to 1988. Larson was the founding president of the U.S.-China Green Energy Council in 2008. The nonprofit, based in Silicon Valley, promotes collaboration between the two countries to help develop technology to combat climate change. “Larson’s contribution in the U.S.-China collaboration was priceless,” the organization’s leaders wrote on its website.
    “He was a role model to not only his peers but also to the next generation. His voice and smile will always remain in our hearts.” AN ACTIVE VOLUNTEER He joined the Institute of Radio Engineers, one of IEEE’s predecessor societies, in 1958 as a student member at the suggestion of his father. Larson told the History Center that his father explained to him that if he was serious about working with computers, he should “join an organization that will give you information and people you can talk to and network with.” He was honored with the 1968 Outstanding Young Electrical Engineer Award from IEEE-Eta Kappa Nu, IEEE’s honor society. Larson began volunteering in 1968 as an editorial board member of IEEE Transactions on Automatic Control. He went on to become its editor and served for nearly five years. He then served on the IEEE Control Systems Society’s administration committee and became the society’s 1975 president. He was the 1978 Division I director and vice president of IEEE Technical Activities. He was elected as IEEE president in 1982 and also served as IEEE Foundation president. Larson was a member of the IEEE Heritage Circle—a cumulative giving donor recognition group. He pledged more than US $10,000 to support IEEE programs such as the History Center and Smart Village. His family made a donation in his memory to Smart Village through the IEEE Foundation. The family has invited others to make donations in his name. To share your condolences or memories of Robert Larson, please use the commenting form below.

  • Acer Goes Big on Glasses-Free, 3D Monitors—Look Out, VR
    by Matthew S. Smith on 20. May 2022. at 14:54

    Acer, the world’s fifth largest PC brand, wants to take the growing AR/VR market by the horns with its SpatialLabs glasses-free stereoscopic 3D displays. First teased in 2021 in a variant of Acer’s ConceptD 7 laptop, the technology expands this summer in a pair of portable monitors, the SpatialLabs View and View Pro, and select Acer Predator gaming laptops. The launch is paired with artificial-intelligence-powered software for converting existing 2D content into stereoscopic 3D. “We see a convergence of virtual and reality,” Jane Hsu, head of business development for SpatialLabs, said in an interview. “It’s a different form for users to start interacting with a virtual world.” Evolutionary, not revolutionary Glasses-free stereoscopic 3D isn’t new. The technology has powered several niche products and prototypes, such as Sony’s Spatial Reality Display, but its most famous debut was Nintendo’s 3DS portable game console. The 3DS filtered two images through a display layer called a parallax barrier. This barrier controlled the angle an image reached the user’s eyes to create the 3D effect. Because angle was important, the 3DS used cameras that detected the user’s eyes and adjusted the image to compensate for viewing angle. Acer’s technology is similar. It also displays two images, which are filtered through an “optical layer,” and has cameras to track and compensate for the user’s viewing angle. So, what’s different this time? “The fundamental difference is that the computing power is way different, and resolution is way different,” said Hsu. “The Nintendo, that was 800 by 240. In a sense, the technology is the same, but over time it has improved for a crystal-clear, high-resolution experience.” Resolution is important to this form of glasses-free 3D. Because it renders two images to create the 3D effect, the resolution of the display is cut in half on the horizontal axis when 3D is on. The 3DS cut resolution to 400 by 240 when 3D was on, and blurry visuals were a common complaint among critics. Acer’s SpatialLabs laptops and displays are a big improvement. Each provides native 4K (3,840 by 2,160 resolution) in 2D. That’s 43 times the pixel count of Nintendo’s 3DS. Turning 3D on shaves resolution to 1,920 by 2,160 per eye, which, while lower, still works out to a higher pixel density than a 27-inch 4K monitor (see the sketch below). Hsu says advancements in AI compute are also key. Partners like Nvidia and Intel can now accelerate AI in hardware, a feature that wasn’t common a half decade ago. Acer has harnessed this for SpatialLabs GO, a software utility that can convert full-screen content from 2D to stereoscopic 3D. This should make SpatialLabs useful with a wider range of content. It can also help creators generate content for use in stereoscopic 3D by importing and converting existing assets. A new angle on augmented reality Acer was a lead partner in Microsoft’s push for mixed-reality headsets. They were a flop, and their failure taught Acer hard lessons about how people approach AR/VR hardware in the real world. “Acer spent a lot bringing VR headsets to market, but...it was not very successful,” Acer Co-COO Jerry Kao said in an interview. “There were limitations. It’s not comfortable, or it’s expensive, and you need space around you. So, we wanted to address this.” SpatialLabs is a complementary alternative. Creators can use SpatialLabs to achieve a 3D effect in their home office without pushing aside furniture. 
    The Acer View Pro, meant for commercial use, may have a future in retail displays, a use that headsets can't address. The View Pro display is built for use in kiosks and retail displays.Acer Most of the SpatialLabs product line, including the ConceptD 7 laptop and View displays, leans toward creative professionals using programs like Maya and Blender to create 3D content. Acer says its software suite has “out-of-the-box support for all major file formats.” It recently added support for Datasmith, a plug-in used to import assets to Epic’s Unreal Engine. But the technology is also coming to Predator gaming laptops for glasses-free stereoscopic 3D in select titles like Forza Horizon 5 and The Witcher 3: Wild Hunt. Gaming seems a natural fit given its history in Nintendo’s handheld, and Hsu thinks it will help attract mainstream attention. “When the Turn 10 team [developer of the Forza Horizon series] saw what we had done with Forza Horizon 5, they were like, ‘Wow, this is so great!’ ” said Hsu. “They said, ‘You know what? I think I can build the scene with even more depth.’ And this is just the beginning.” Does glasses-free 3D really stand a chance? SpatialLabs brings gains in resolution and performance, but it’s far from a surefire hit. Acer is the only PC maker currently pursuing the hardware. Going it alone won’t be easy. “While the tech seems quite appealing, it will likely remain a niche product that’ll be used in rare instances by designers or developers rather than the average consumer,” Jitesh Ubrani, research manager at IDC, said in an email. He thinks Acer could find it difficult to deliver on price and availability, “both of which are tough to do for such a fringe technology.” I asked Hsu how Acer will solve these issues. “In a way he’s right, it is difficult. We’re building this ourselves,” said Hsu. “But also, the hardware is more mature.” Kao chimed in to say SpatialLabs will stand out in what might be a weak year for home computers. “The PC in 2022 is encountering a lot of problems,” Kao said. He sees that as a motivation, not a barrier, for novel technology on the PC. “Intel, Google, Microsoft, and a lot of people, they have technology,” said Kao. “But they don’t know how to leverage that technology in the product and deliver the experience to specific people. That is what Acer is good at.”
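    A quick back-of-the-envelope check of the resolution figures quoted above, written as a short Python sketch. It is illustrative only, not Acer or Nintendo code; the quoted resolutions come from the article, while the 15.6-inch and 27-inch panel diagonals used for the pixel-density comparison are assumptions.

        import math

        def pixels(width: int, height: int) -> int:
            """Total pixel count of a panel."""
            return width * height

        def ppi(width: int, height: int, diagonal_inches: float) -> float:
            """Pixels per inch along the panel diagonal."""
            return math.hypot(width, height) / diagonal_inches

        acer_2d = (3840, 2160)      # native 4K in 2D mode (quoted)
        nintendo_3ds = (800, 240)   # 3DS top screen (quoted)

        # Stereoscopic mode splits the horizontal resolution between two views.
        acer_3d_per_eye = (acer_2d[0] // 2, acer_2d[1])   # 1920 x 2160

        print(f"4K vs. 3DS pixel count: {pixels(*acer_2d) / pixels(*nintendo_3ds):.0f}x")  # ~43x
        print(f"Per-eye resolution in 3D mode: {acer_3d_per_eye}")

        # Pixel-density comparison; the diagonal sizes below are assumptions.
        print(f"15.6-inch panel in 3D mode: {ppi(1920, 2160, 15.6):.0f} ppi")
        print(f"27-inch 4K monitor:         {ppi(3840, 2160, 27.0):.0f} ppi")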

  • DARPA Wants a Better, Badder Caspian Sea Monster
    by Evan Ackerman on 19. May 2022. at 19:31

    Arguably, the primary job of any military organization is moving enormous amounts of stuff from one place to another as quickly and efficiently as possible. Some of that stuff is weaponry, but the vast majority are things that support that weaponry—fuel, spare parts, personnel, and so on. At the moment, the U.S. military has two options when it comes to transporting large amounts of payload. Option one is boats (a sealift), which are efficient, but also slow and require ports. Option two is planes (an airlift), which are faster by a couple of orders of magnitude, but also expensive and require runways. To solve this, the Defense Advanced Research Projects Agency (DARPA) wants to combine traditional sealift and airlift with the Liberty Lifter program, which aims to “design, build, and flight test an affordable, innovative, and disruptive seaplane” that “enables efficient theater-range transport of large payloads at speeds far exceeding existing sea lift platforms.” DARPA DARPA is asking for a design like this to take advantage of ground effect, which occurs when an aircraft’s wing deflects air downward and proximity to the ground generates a cushioning effect due to the compression of air between the bottom of the wing and the ground. This boosts lift and lowers drag to yield a substantial overall improvement in efficiency. Ground effect works on both water and land, but you can take advantage of it for only so long on land before your aircraft runs into something. Which is why oceans are the ideal place for these aircraft—or ships, depending on your perspective. During the late 1980s, the Soviets (and later the Russians) leveraged ground effect in the design of a handful of awesomely bizarre ships and aircraft. There’s the VVA-14, which was also an airplane, along with the vehicle shown in DARPA’s video above, the Lun-class ekranoplan, which operated until the late 1990s. The video clip really does not do this thing justice, so here’s a better picture, taken a couple of years ago: Instagram The Lun (only one was ever made) had a wingspan of 44 meters and was powered by eight turbojet engines. It flew about 4 meters above the water at speeds of up to 550 kilometers per hour, and could transport almost 100,000 kilograms of cargo for 2,000 km. It was based on an earlier, even larger prototype (the largest aircraft in the world at the time) that the CIA spotted in satellite images in 1967 and which seems to have seriously freaked them out. It was nicknamed the Caspian Sea Monster, and it wasn’t until the 1980s that the West understood what it was and how it worked. In the mid 1990s, DARPA itself took a serious look at a stupendously large ground-effect vehicle of its own, the Aerocon Dash 1.6 wingship. The concept image below is of a 4.5-million-kg vehicle, 175 meters long with a 100-meter wingspan, powered by 20 (!) jet engines: Wikipedia With a range of almost 20,000 km at over 700 km/h, the wingship could have carried 3,000 passengers or 1.4 million kg of cargo. By 1994, though, DARPA had decided that the potential billion-dollar project to build a wingship like this was too risky, and canceled the whole thing. Less than 10 years later, Boeing’s Phantom Works started exploring an enormous ground-effect aircraft, the Pelican Ultra Large Transport Aircraft. The Pelican would have been even larger than the Aerocon wingship, with a wingspan of 152 meters and a payload of 1.2 million kg—that’s about 178 shipping containers’ worth. 
Unlike the wingship, the Pelican would take advantage of ground effect to boost efficiency only in transit above water, but would otherwise use runways like a normal aircraft and be able to reach flight altitudes of 7,500 meters. Operating as a traditional aircraft and with an optimal payload, the Pelican would have a range of about 12,000 km. In ground effect, however, the range would have increased to 18,500 km, illustrating the appeal of designs like these. But Boeing dropped the project in 2005 to focus on lower cost, less risky options. We’d be remiss if we didn’t at least briefly mention two other massive aircraft: the H-4 Hercules, the cargo seaplane built by Hughes Aircraft Co. in the 1940s, and the Stratolaunch carrier aircraft, which features a twin-fuselage configuration that DARPA seems to be favoring in its concept video for some reason. From the sound of DARPA’s announcement, they’re looking for something a bit more like the Pelican than the Aerocon Dash or the Lun. DARPA wants the Liberty Lifter to be able to sustain flight out of ground effect if necessary, although it’s expected to spend most of its time over water for efficiency. It won’t use runways on land at all, though, and should be able to stay out on the water for 4 to 6 weeks at a time, operating even in rough seas—a significant challenge for ground-effect aircraft. DARPA is looking for an operational range of 7,500 km, with a maximum payload of at least 90,000 kg, including the ability to launch and recover amphibious vehicles. The hardest thing DARPA is asking for could be that, unlike most other X-planes, the Liberty Lifter should incorporate a “low cost design and construction philosophy” inspired by the mass-produced Liberty ships of World War II. With US $15 million to be awarded to up to two Liberty Lifter concepts, DARPA is hoping that at least one of those concepts will pass a system-level critical design review in 2025. If everything goes well after that, the first flight of a full-scale prototype vehicle could happen as early as 2027.
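    To make the designs above easier to compare, here is a minimal Python sketch that tabulates the payload and range figures quoted in the article. The payload-range product is a rough transport metric added for illustration; it is not a DARPA figure, and the Liberty Lifter numbers are program goals rather than specifications of a built aircraft.

        # Figures are the ones quoted in the article; None means no figure was given.
        designs = [
            # name,                            payload (kg), range (km), speed (km/h)
            ("Lun-class ekranoplan",               100_000,      2_000,   550),
            ("Aerocon Dash 1.6 wingship",        1_400_000,     20_000,   700),
            ("Boeing Pelican (ground effect)",   1_200_000,     18_500,  None),
            ("Liberty Lifter (program goal)",       90_000,      7_500,  None),
        ]

        print(f"{'design':<34}{'payload (t)':>12}{'range (km)':>12}{'t*km':>16}")
        for name, payload_kg, range_km, speed_kmh in designs:
            tonnes = payload_kg / 1000
            print(f"{name:<34}{tonnes:>12,.0f}{range_km:>12,}{tonnes * range_km:>16,.0f}")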

  • IEEE Spectrum Wins Six Neal Awards
    by Ann Townley on 18. May 2022. at 18:00

    IEEE Spectrum garnered top honors at this year’s annual Jesse H. Neal Awards ceremony, held on 26 April. Known as the “Pulitzer Prizes” of business-to-business journalism, the Neal Awards recognize editorial excellence. The awards are given by the SIIA (Software and Information Industry Association). For the fifth year in a row, IEEE Spectrum was awarded the Best Media Brand. The award is given for overall editorial excellence. IEEE Spectrum also received these awards: Best Website. Best Art Direction for a Cover: April 2021, “The Ultimate Incubator” (Senior Art Director Mark Montgomery, Photography Director Randi Klett, and the photography team of The Voorhes). Best Art Direction for a Single Article: April 2021 “The Ultimate Incubator” (Deputy Art Director Brandon Palacio, Photography Director Randi Klett, the photography team of The Voorhes, and illustrator Chris Philpot). Best Range of Work by a Single Author (Senior Digital Editor Evan Ackerman). Best Commentary: Hands On (Senior Editor Project Manager Stephen Cass, Senior Editor David Schneider, and illustrator James Provost). Best Media Brand “The talented, dedicated team that produces the world’s best tech magazine day in and day out deserved to win Best Media Brand for the fifth year running,” says Harry Goldstein, IEEE Spectrum’s acting editor in chief. “We’re also delighted that Evan Ackerman’s astounding body of work on robotics earned an award, along with our Hands On column, written and curated by Stephen Cass and David Schneider, assisted by online art director Erik Vrielink. “And speaking of art direction,” Goldstein says, “our other two Neals, for Best Cover and Best Single Article Treatment, came through the efforts of our staff including Brandon Palacio, Randi Klett, and Mark Montgomery.”

  • Royal Mail Is Doing the Right Thing With Drone Delivery
    by Evan Ackerman on 17. May 2022. at 18:17

    Eight-ish years ago, back when drone delivery was more hype than airborne reality (even more so than it is now), DHL tested a fully autonomous delivery service that relied on drones to deliver packages to an island 12 kilometers off Germany’s North Sea coast. The other alternative for getting parcels to the island was a ferry. But because the ferry didn’t run every day, the drones filled the scheduling gaps so residents of the island could get important packages without having to wait. “To the extent that it is technically feasible and economically sensible,” DHL said at the time, “the use of [drones] to deliver urgently needed goods to thinly populated or remote areas or in emergencies is an interesting option for the future.” We’ve seen Zipline have success with this approach; now, drones are becoming affordable and reliable enough that they’re starting to make sense for use cases that are slightly less urgent than blood and medication deliveries. Now, thinly populated or remote areas can benefit from drones even if they aren’t having an emergency. Case in point: The United Kingdom’s Royal Mail has announced plans to establish more than 50 new postal drone routes over the next three years. The drones themselves come from Windracers Group, and they’re beefy, able to carry a payload of 100 kilograms up to 1,000 km with full autonomy. Pretty much everything on them is built for redundancy: a pair of engines, six separate control units, and backups for the avionics, communications, and ground control. Here’s an overview of a pilot (pilotless?) project from last year: Subject to CAA approval and the ongoing planned improvement in UAV economics, Royal Mail is aiming to secure more than 50 drone routes supported by up to 200 drones over the next three years. Island communities across the Isles of Scilly, Shetland Islands, Orkney Islands, and the Hebrides would be the first to benefit. Longer term, the ambition is to deploy a fleet of more than 500 drones servicing all corners of the U.K. “Corners” is the operative word here, and it’s being used more exclusively than inclusively—these islands are particularly inconvenient to get to, and drones really are the best way of getting regular, reliable mail delivery to these outposts in a cost-effective way. Other options are infrequent boats or even more infrequent large piloted aircraft. But when you consider the horrific relative expense of those modes of transportation, it’s hard for drones not to be cast in a favorable light. And when you want frequent service to a location such as Fair Isle, as shown in the video below, a drone is not only your best bet but also your only reasonable one—it flew 105 km in 40 minutes, fighting strong winds much of the way: There’s still some work to be done to gain the approval of the U.K.’s Civil Aviation Authority. At this point, figuring out those airspace protections and safety regulations and all that stuff is likely more of an obstacle than the technical challenges that remain. But personally, I’m much more optimistic about use cases like the one Royal Mail is proposing here than I am about drone delivery of tacos or whatever to suburbanites, because the latter seems very much like a luxury, while the former is an essential service.

  • Simple, Cheap, and Portable: A Filter-Free Desalination System for a Thirsty World
    by Payal Dhar on 17. May 2022. at 17:08

    MIT researchers have developed a prototype of a suitcase-size device that can turn seawater into safe drinking water. According to the International Desalination Association, more than 300 million people around the world now get their drinking water from the sea. With climate change exacerbating water scarcity globally, seawater desalination is stepping in to fill the void. But whereas commercial desalination plants are designed to meet large-scale demand, there is also a need for portable systems that can be carried into remote regions or set up as stand-ins for municipal water works in the wake of a disaster. A group of scientists from MIT has developed just such a portable desalination unit; it’s the size of a medium suitcase and weighs less than 10 kilograms. The unit’s one-button operation requires no technical knowledge. What’s more, it has a completely filter-free design. Unlike existing portable desalination systems based on reverse osmosis, the MIT team’s prototype does not need any high-pressure pumping or maintenance by technicians. The MIT researchers described their invention in a paper titled “Portable Seawater Desalination System for Generating Drinkable Water in Remote Locations.” The paper was posted in the 14 April online edition of Environmental Science & Technology, a publication of the American Chemical Society. The unit produces 0.3 liters of potable drinking water per hour, while consuming a minuscule 9 watt-hours of energy. Plant-scale reverse-osmosis water-treatment operations may be three to four times as energy efficient, and yield far greater quantities of freshwater at much faster rates, but the researchers say the trade-off in terms of weight and size makes their invention the first and only entrant in a new desalination niche. The most notable feature of the unit is its unfiltered design. A filter is a barrier that catches the impurities you don’t want in your water, explains Jongyoon Han, an electrical and biological engineer, and lead author of the study. “We don’t have that specifically because it always tends to clog, and [then] you need to replace it.” This makes traditional portable systems challenging for laypeople to use. Instead, the researchers use ion-concentration polarization (ICP) and electrodialysis (ED) to separate the salt from the water. “Instead of filtering, we are nudging the contaminants [in this case, salt] away from the water,” Han says. This portable unit, he adds, is a good demonstration of the effectiveness of ICP desalination technology. “It is quite different from other technologies, in the sense that I can remove both large particles and solids all together.” The setup includes a two-stage ion-concentration polarization (ICP) process, with water flowing through six modules in the first stage and then three in the second stage, followed by a single electrodialysis process.M. Scott Brauer ICP uses an ion-selective membrane that allows the passage of one kind of ion when current is applied—either cations or anions. “What happens is that, [if] these membranes can transfer only cations, what about the anions?” Han asks. “The anions disappear near the membrane because nature really doesn’t like free ions hanging around…. So, [as a result, there is a region] near the membrane that is salt-free.” The salt-free region is the spot from which freshwater is harvested. “What is unique about our technology is that we figured out a way to separate…a diverse array of contaminants [from water] in a single process,” says Han. 
    “So we can go [straight] from seawater to drinkable water.” It takes 40 liters of seawater to yield a single liter of drinking water. This 2.5 percent recovery rate might seem to come at a high environmental cost, says Junghyo Yoon, a researcher at Han’s lab. But Yoon reminds us that seawater is an infinite resource, so a low recovery rate is not a significant issue. The portable device does not require any replacement filters, which greatly reduces the long-term maintenance requirements.M. Scott Brauer The MIT group’s device is an out-of-the-box system; you can just power it up, connect it to a saltwater source, and wait for potable water. “The box includes the battery and…[it is] like a typical laptop battery, anywhere between 60 and 100 watts,” Han says. “We think that that can operate for about a day or so.” A solar panel is another option, especially in a disaster zone, where there might not be an accessible electric power source. Yoon points out that the results reported in the group’s paper are already a year old. “[Since we recorded the results listed in the paper], we have successfully ramped up the desalination rate to 1 liter [of freshwater] per hour,” he reports. “We are pushing ourselves to scale up to 10 liters per hour for practical applications.” He hopes to secure enough investment by the end of this year to take the next steps toward commercialization. “We expect that we can have the first prototype available for beta testing by the end of 2023. [We predict that] the cost will be [US] $1,500,” says Yoon. That would be far cheaper than portable desalination systems currently on the market—mostly models using reverse-osmosis filtration, which go for around $5,000. “Although they have higher flow rates and generate a larger amount of clean water because [they] are bigger, they are generally not so user friendly,” Han says. “Our system is much smaller, and uses much less power. And the goal here is to generate just enough water, in a manner that is very user friendly to address this particular need of disaster relief.” Aside from the flow rate, Han is also not happy with the device’s energy consumption at present. “We don’t think [it] is actually optimal,” he says. “Although [its energy efficiency] is good enough, it can always be made better by optimizing the [process].”
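    The headline numbers above are easy to turn into per-liter figures. The short Python sketch below is my own arithmetic on the quoted values, not code from the MIT team; in particular, it reads the article's "9 watt-hours" as the energy consumed per hour of operation, which is an assumption.

        output_l_per_h = 0.3        # liters of drinking water per hour (quoted)
        energy_wh_per_h = 9.0       # watt-hours per hour of operation (assumed reading)
        feed_per_product = 40.0     # liters of seawater per liter of drinking water (quoted)

        energy_per_liter_wh = energy_wh_per_h / output_l_per_h   # ~30 Wh per liter
        recovery_rate_pct = 100.0 / feed_per_product             # 2.5 percent

        print(f"energy per liter of drinking water: {energy_per_liter_wh:.0f} Wh")
        print(f"recovery rate: {recovery_rate_pct:.1f} %")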

  • Can the Artemis Moon Mission Revive the Glamour of Big Tech?
    by Rodney Brooks on 17. May 2022. at 15:00

    Today, the phrase “big tech” typically resonates negatively. It conjures up disturbing aspects of social media and the rise of megacorporations that seem beyond the reach of the law. And yet decades ago, big tech was typically associated with the glamor of motion: of speed, of power, and the thrill of exploring new frontiers. Two leaders, Wernher von Braun and Juan Trippe, became household names as they made bold bets that paid off and enabled people to go where few thought it possible not long before. Von Braun had a troubling history: As a 30-year-old, he had convinced Adolf Hitler to fund his V-2 missiles, of which thousands were built, with slave labor. They rained down on Paris, London, and other cities, killing 9,000 people, mostly civilians. But when von Braun’s Apollo program came to fruition, in the late 1960s, huge crowds gathered every few months on the Florida coast to watch the thundering Saturn V rockets take off. It was a partylike atmosphere and a joyous time. We humans were going to the moon, making a connection that had seemed both improbable and impossible just a few years before. The rapturous crowds in Florida gathered during a turbulent time, with popular culture dominated by sex, drugs, rock and roll, and pervasive antiestablishment sentiment. And yet, in that unsettled era, techno-optimism somehow took root. The most religious experience I have ever had was during the astronauts’ live-to-Earth reading of Genesis from orbit around the moon on Christmas Eve, 1968. Few probably realize that the big Apollo gatherings had a clear precedent in another mass outpouring of hope about large-scale human adventure. It occurred around San Francisco Bay at the height of the Great Depression, 30 years before the Apollo landings. Another indomitable spirit, Trippe, the president of Pan Am, was betting against all odds that he could open transpacific passenger service, and make it real on a timetable that no sober advisor believed possible. This Martin M-130 flying boat, called the China Clipper, took off on a test flight from San Francisco Bay in October 1935—one month before it made the first commercial transpacific flight. FPG/Archive Photos/Getty Images On 22 November 1935, the first transpacific commercial flight took to the skies. At 3:46 p.m., a Martin M-130 flying boat, the largest passenger plane built up to that point, lumbered into the air from Pan Am's base in Alameda, Calif., on the east side of San Francisco Bay. On that first trip, the plane carried only mail under contract to the U.S. post office. Over 100,000 people had gathered around the bay to watch. Captain Edwin Musick powered the "China Clipper" northward in the bay and up over the waves. He planned to fly over the Bay Bridge, the double suspension bridge that today spans the bay and links San Francisco with Oakland. But he couldn't gain altitude quickly enough with the 4,000 gallons of fuel he was carrying. Fortunately, the roadway had not yet been hung from the catenaries, and he managed to fly under them. When he got to the Golden Gate Bridge, also under construction at the time, he just barely managed to get above it. 
When Musick and his crew arrived in the Philippines six days later, there were 200,000 people cheering wildly in Manila Harbor. Musick hand-delivered a letter from President Franklin D. Roosevelt to President Manuel Quezon of the Philippines. Quezon told Musick, "You have swept away forever the distance which from the beginning of time has separated the great continent of America from the beautiful islands of the Pacific." Pan Am’s landing in Manila marked the start of globalism, of our modern connectivity. Today, as NASA’s Artemis mission heralds a new era of human space exploration, it’s important to remember how much difficulty and serendipity there was on the way to Manila and the moon. On the Clipper’s first return flight from Honolulu to San Francisco, disaster was very narrowly averted when the big seaplane landed in the bay with just 1 minute of fuel left in its tanks. Von Braun could have easily ended up dead or in Russia rather than in the United States. Similarly, there will be many twists and turns on the way to the moon and Mars. The obvious successor to Trippe and von Braun would now seem to be Elon Musk—but maybe it’ll be someone else. Regardless, and despite our divisions and perhaps even future pandemics, people will undoubtedly come out in droves once again to witness the takeoffs and landings. At last, big tech will again be something to celebrate. This article appears in the June 2022 print issue as “22 November 1935: The Day Globalism was Born.”

  • Before Ships Used GPS, There Was the Fresnel Lens
    by Joanna Goodrich on 16. May 2022. at 18:00

    Ships today use satellite-based radio navigation, GPS, and other tools to prevent accidents. But back at the beginning of the 19th century, lighthouses guided ships away from rocky shores using an oil lamp placed between a concave mirror and a glass lens to produce a beam of light. The mirrors were not very effective, though, and the lenses were murky. The light was difficult to see from a distance on a clear night, let alone in heavy fog or a storm. In 1822 French civil engineer Augustin-Jean Fresnel (pronounced “Frey Nel”) invented a new type of lens that produced a much stronger beam of light. The Fresnel lens is still used today in active lighthouses around the world. It also can be found in movie projectors, magnifying glasses, spacecraft, and other applications. Fresnel’s technical achievement is worthy of being named an IEEE Milestone, according to the IEEE History Center, but no one has proposed it yet. Any IEEE member can submit a milestone proposal to the IEEE History Center. The Milestone program honors significant accomplishments in the history of electrical and electronics engineering. PREVENTING SHIPWRECKS Because of increasing complaints from French fishermen and ship captains about the poor quality of the light emanating from lighthouses, in 1811 the French Commission on Lighthouses established a committee under the authority of the Corps of Bridges and Roads to investigate how lighthouse illumination could be improved. One member of that committee was Fresnel, who worked for the French civil service corps as an engineer. He had considerable expertise in optics and light waves. In fact, in 1817 he proved that his wave theory—which stated the wave motion of light is transverse rather than longitudinal—was correct. In transverse waves, a wave oscillates perpendicular to the direction of its travel. Longitudinal waves, like sound, oscillate in the same direction that the wave travels. Fresnel’s analysis of contemporary lighthouse technology found the lenses were so thick that only half the light produced shined through. He decided he could do better using his wave theory. His design consisted of 24 glass prisms of varying shapes and sizes arranged in concentric circles within a wire cage. The prisms, placed both in front of and behind four oil lamps, replaced both the mirror and the glass lens of the previous method. Prisms at the edge of the circle refract light slightly more than those closer to the center, so the light rays all emerge in parallel. The design could focus nearly 98 percent of the rays generated by the lamps, producing a beam that could be seen more than 32 kilometers away. Inside the Lindesnes Lighthouse's Fresnel lens in southern Norway.DeAgostini/Getty Images A clock mechanism, which had to be wound by hand every few hours, was used to revolve the metal frame around the lamps to produce unique light patterns for specific lighthouses. A lighthouse could send out a flash regularly every 5 seconds, for example, or it could have a 10-second period of darkness and a 3-second period of brightness. Captains counted the number of flashes sent out by a lighthouse to calculate their ships’ location. The lenses came in several sizes, known as orders. The largest order, the Hyper-Radial, had a 1,330-millimeter diameter. The smallest, the eighth order, had a 75-mm diameter and could be found in lighthouses on bays and rivers. In 1823 the French Commission on Lighthouses committee approved the use of the Fresnel lens in all lighthouses in France. 
    That same year, the first one was installed in the Cordouan Lighthouse, in southwestern France. The lens eventually was adopted in other countries. By the 1860s, all the lighthouses in the United States had been fitted with a Fresnel lens, according to the Smithsonian Institution. Fresnel continued to modify the lens for several years. His final design, which he completed in 1825, could spin 360 degrees and was the first so-called fixed/flashing lens. It produced a fixed light followed by a brilliant flash followed by another fixed light. With the invention of modern navigational tools, the lighthouse has become largely obsolete for maritime safety. But the lens invented for it lives on in side mirrors used on trucks, solar panels, and photographic lighting equipment. If you are interested in submitting a proposal, you can do so through the IEEE History Center’s website. The History Center is funded by donations to the IEEE Foundation. For more on the history of lighthouse technology, visit the U.S. National Park Service, Ponce Inlet Lighthouse and Museum, and American Physical Society websites.
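    The light characters described above amount to a simple identification code: a repeating pattern of flashes and dark intervals unique to each lighthouse. The Python sketch below is a hypothetical illustration of that idea; the table of characters and the lighthouse names are invented, and a real navigator would also use charts and bearings.

        # (flashes per cycle, cycle period in seconds) -> lighthouse; all entries invented.
        LIGHT_CHARACTERS = {
            (1, 5.0):  "Lighthouse A",   # one flash every 5 seconds
            (1, 13.0): "Lighthouse B",   # 3 seconds of light, 10 seconds of darkness
            (2, 15.0): "Lighthouse C",   # two flashes per 15-second cycle
        }

        def identify(flashes_seen: int, elapsed_s: float, tolerance_s: float = 0.5) -> str:
            """Match an observed flash count and elapsed time against the table."""
            for (count, period), name in LIGHT_CHARACTERS.items():
                cycles = flashes_seen / count
                if cycles >= 1 and abs(elapsed_s / cycles - period) <= tolerance_s:
                    return name
            return "unknown light"

        # A watch officer counts 4 flashes over 52 seconds: four roughly 13-second cycles.
        print(identify(flashes_seen=4, elapsed_s=52.0))   # -> Lighthouse B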

  • Modeling Microfluidic Organ-on-a-Chip Devices
    by COMSOL on 16. May 2022. at 15:23

    If you want to enhance your modeling and design processes for microfluidic organ-on-a-chip devices, tune into this webinar. You will learn methods for simulating the performance and behavior of microfluidic organ-on-a-chip devices and microphysiological systems in COMSOL Multiphysics. Additionally, you will see how to couple multiple physical effects in your model, including chemical transport, particle tracing, and fluid–structure interaction. You will also learn how to distill simulation output to find key design parameters and obtain a high-level description of system performance and behavior. There will also be a live demonstration of how to set up a model of a microfluidic lung-on-a-chip device with two-way coupled fluid–structure interaction. The webinar will conclude with a Q&A session. Register now for this free webinar!

  • Why These Members Donate to the IEEE Foundation
    by IEEE Foundation on 13. May 2022. at 18:00

    The IEEE Foundation partners with donors to enable more than 250 IEEE programs to help advance technology for the benefit of humanity. The Foundation’s support is made possible, in part, by the gifts from its generous donors. The IEEE Heritage Circle recognizes individual donors’ philanthropic spirit. Each level of the Heritage Circle is named for one of six great innovators in the fields of science and technology: Nikola Tesla, Alexander Graham Bell, Thomas Alva Edison, James Clerk Maxwell, Michael Faraday, and Alessandro Volta. Here is a look at some of those who have made contributions and what programs they support. THE TESLA LEVEL Members at this level have pledged more than US $10,000. The Tesla level is the most common among givers. Henry Samueli BROADCOM Henry Samueli, cofounder of Broadcom and recipient of more than 75 U.S. patents, is passionate about encouraging and supporting the next generation of engineers. While he was an engineering professor at the University of California, Los Angeles, Samueli helped found the company, now headquartered in San Jose, Calif., in 1991 with one of his Ph.D. students, Henry T. Nicholas. Samueli contributes to IEEE–Eta Kappa Nu, the honor society. The IEEE Life Fellow is a member of IEEE-HKN’s Iota Gamma Chapter, and in 2019, he received the honor society’s highest recognition: eminent member. He received the 2021 IEEE Founders Medal for “leadership in research, development, and commercialization of broadband communication and networking technology with global impact.” He donated the full cash prize of $10,000 to the IEEE Foundation’s Awards Program Fund. Steve Wozniak, a Silicon Valley icon, cofounded Apple with Steve Jobs in 1976 and helped shape the computing industry by engineering the groundbreaking Apple I and Apple II computers, as well as the Macintosh. What people might not know is that Wozniak is also a member of IEEE-HKN’s Mu Chapter. The IEEE Fellow received the 2021 IEEE Masaru Ibuka Consumer Electronics Award “for pioneering the design of consumer-friendly personal computers.” He donated his $10,000 prize to IEEE-HKN. IEEE Member Mary Ellen Zellerbach’s investing success is impressive. As part of the original pioneering index fund team at Wells Fargo Advisors, she introduced and managed the first international index fund. Zellerbach also played a key role in placing the first Standard & Poor’s 500 index futures trade on behalf of a U.S. institutional investor. She is currently managing director of Martin Investment Management, a majority-women-owned firm in Evanston, Ill. After 11 years of serving on the IEEE Investment Committee, Zellerbach was elected 2022 director of the IEEE Foundation Board. She donates to the Foundation in support of its fund and other programs. Even though IEEE Life Member Jack Jewell has been active with IEEE for more than 30 years, he says his mouth still waters when he receives the latest Photonics Technology Letters in the mail. So when Jewell won the prestigious IEEE Photonics Award last year from the IEEE Photonics Society for his “seminal and sustained contributions to the development and commercialization of vertical-cavity surface-emitting lasers,” he decided to give back to the organization that has given him so much. “Contributing to IEEE extends our capabilities beyond our own personal professions,” Jewell said when asked why he donated the $10,000 cash prize. 
“I hope that the donations will enhance people’s lives, both professionally and personally.” His donation is helping to support IEEE REACH (Raising Engineering Awareness through the Conduit of History), the IEEE Foundation Fund, the IEEE Photonics Society, and the IEEE Awards program. THE ALEXANDER GRAHAM BELL LEVEL These members have pledged more than $50,000. Lewis “Lew” TermanIEEE Foundation Lewis “Lew” Terman joined the Institute of Radio Engineers, one of IEEE’s predecessor societies, as a student member in 1958 at the suggestion of his father, who was then the IRE president. The IEEE Life Fellow, who served as 2008 IEEE president, has been a member ever since. When speaking about his history with the organization, Terman says that one could say “IEEE is in my genes.” Terman and his wife, Bobbie, have been steadfast supporters of the IEEE Foundation’s programs throughout the years. Their donations have supported IEEE Smart Village, which brings electricity, education, and economic development to energy-deprived communities around the world, and EPICS (Engineering Projects in Community Service) in IEEE. EPICS empowers students to apply technical solutions to aid their communities. THE THOMAS ALVA EDISON LEVEL These members have pledged more than $100,000. In celebration of his 50-year anniversary as a member of IEEE and IEEE-HKN last year, John McDonald and his wife, Jo-Ann, have made a four-year pledge split between two programs to the IEEE Foundation. One of their gifts will go to the IEEE Power & Energy Society’s Scholarship Plus Initiative, which nurtures budding power and electrical engineers through scholarships, mentoring opportunities, and internships. The program is especially important to McDonald, as he spent five years volunteering on the initiative’s scholarship selection committee. McDonald is a distinguished lecturer for IEEE PES and teaches classes on the smart grid. The McDonalds’ other gift supports the IEEE–Eta Kappa Nu Student Chapter Support Fund. The new program aims to generate money and develop training for HKN chapters. The fund also supports the creation of a chapter grant program and chapter coaching program. The initiative is near and dear to his heart, McDonald says, as he has been a member of IEEE-HKN’s Beta Chapter since 1971. He joined as an undergrad at Purdue University, in West Lafayette, Ind. John McDonald and Bahman HoveidaIEEE Foundation THE ALESSANDRO VOLTA LEVEL These members have pledged more than $1 million. Bahman Hoveida joined IEEE as a student member at the suggestion of one of his electrical engineering professors at the University of Illinois Urbana-Champaign. The IEEE life senior member created the Hoveida Family Foundation, in Bainbridge Island, Wash., to support the next generation of electrical engineers. The nonprofit awards grants to support scientific and academic programs in the United States. The Hoveida Family Foundation also donates to the IEEE PES Scholarship Plus Initiative. Of the 72 high-achieving power and energy engineering students to be named a 2021–2022 PES Scholar, 33 are Hoveida Foundation scholars. CONTINUING IMPORTANT WORK To become part of the IEEE Heritage Circle, visit the IEEE Foundation’s website or send a message to donate@ieee.org. To learn more about Foundation donors, programs, scholarships, and grants, follow the Foundation on LinkedIn, Facebook, Twitter or IEEE Collabratec.

  • Video Friday: Automotive Artistry
    by Evan Ackerman on 13. May 2022. at 17:36

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ICRA 2022: 23 May–27 May 2022, PHILADELPHIA IEEE ARSO 2022: 28 May–30 May 2022, LONG BEACH, CALIF. RSS 2022: 21 June–1 July 2022, NEW YORK CITY ERF 2022: 28 June–30 June 2022, ROTTERDAM, NETHERLANDS RoboCup 2022: 11 July–17 July 2022, BANGKOK IEEE CASE 2022: 20 August–24 August 2022, MEXICO CITY CLAWAR 2022: 12 September–14 September 2022, AZORES, PORTUGAL Enjoy today's videos! ABB Robotics has collaborated with two world-renowned artists—8-year-old Indian child prodigy Advait Kolarkar and Dubai-based digital-design collective Illusorr—to create the world’s first robot-painted art car. ABB’s award-winning PixelPaint technology has, without human intervention, perfectly recreated Advait’s swirling, monochromatic design as well as Illusorr’s tricolor geometrical patterns. [ ABB ] Working closely with users and therapists, EPFL spin-off Emovo Care has developed a light and easy-to-attach hand exoskeleton for people unable to grasp objects following a stroke or accident. The device has been successfully tested in several hospitals and rehabilitation centers. This is pretty amazing, because it’s not just a research project—it’s actually a product that's helping patients. If you think this might be able to help you (and you live in Switzerland), Emovo is currently offering free trials. [ Emovo Care ] via [ EPFL ] Thanks, Luca! Uh, I don’t exactly know where this research is going, but the fact that they’ve got a pair of robotic legs that are nearly 2 meters tall is a little scary. [ KIMLAB ] The most impressive thing about this aerial tour of AutoX’s Pingshan RoboTaxi Operations Center is that AutoX has nine (!) more of them. [ AutoX ] In addition to delivering your lunch, Relay+ will also magically transform plastic food packaging into more eco-friendly cardboard. Amazing! [ Relay ] Meet Able Mabel, the incredible robotic housekeeper, whose only function is to make your life more leisurely. Yours for just £500. Too good to be true? Well, in 1966 it is, but if Professor Thring at the department of mechanical engineering of Queen Mary College has his way, by 1976 there could be an Able Mabel in every home. He shows us some of the robotic prototypes he has been working on. This clip is from “Tomorrow's World,” originally broadcast 16 June 1966. [ BBC Archive ] I find the sound effects in this video to be very confusing. [ AgileX ] The first part of this video is extremely satisfying to watch. [ Paper ] via [ AMTL ] Thanks to this unboxing video of the Jueying X20 quadruped, I now know that it’s best practice to tuck your robot dog in when you’ve finished playing with it. [ Deep Robotics ] As not-sold as I am on urban drone delivery, I will grant you that Wing is certainly putting the work in. [ Wing ] GlobalFoundries, a global semiconductor manufacturer, has turned to Spot to further automate their data collection for condition monitoring and predictive maintenance. Manufacturing facilities are filled with thousands of inspection points, and adding fixed sensors to all these assets is not economical. With Spot bringing the sensors to their assets, the team collects valuable information about the thermal condition of pumps and motors, as well as taking analog gauge readings. [ Boston Dynamics ] The Langley Aerodrome No. 
8 (LA-8) is a distributed-electric-propulsion, vertical-takeoff-and-landing (VTOL) aircraft that is being used for wind-tunnel testing and free-flight testing at the NASA Langley Research Center. The intent of the LA-8 project is to provide a low-cost, modular test bed for technologies in the area of advanced air mobility, which includes electric urban and short regional flight. [ NASA ] As social robots become increasingly prevalent in day-to-day environments, they will participate in conversations and appropriately manage the information shared with them. However, little is known about how robots might appropriately discern the sensitivity of information, which has major implications for human-robot trust. As a first step to address a part of this issue, we designed a privacy controller, CONFIDANT, for conversational social robots, capable of using contextual metadata (for example, sentiment, relationships, topic) from conversations to model privacy boundaries. [ Paper ] The Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS) is hosting a series of special talks on modular self-reconfigurable robots, starting with Mark Yim and Kirstin Petersen. Subscribe to the AIRS YouTube channel for more talks over the next few weeks! [ AIRS ] Thanks, Tin Lun!

  • Andrew Ng: Unbiggen AI
    by Eliza Strickland on 9. February 2022. at 15:31

    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A. Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias. Andrew Ng on... What’s next for really big models The career advice he didn’t listen to Defining the data-centric AI movement Synthetic data Why Landing AI asks its customers to do the work The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way? Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions. When you say you want a foundation model for computer vision, what do you mean by that? Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them. What needs to happen for someone to build a foundation model for video? Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision. Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. 
While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries. Back to top It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users. Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation. “In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.” —Andrew Ng, CEO & Founder, Landing AI I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince. I expect they’re both convinced now. Ng: I think so, yes. Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.” Back to top How do you define data-centric AI, and why do you consider it a movement? Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data. When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline. The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up. You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them? Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. 
But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn. When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set? Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system. “Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.” —Andrew Ng For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance. Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training? Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle. One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way. When you talk about engineering the data, what do you mean exactly? Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. 
But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity. For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow. Back to top What about using synthetic data, is that often a good solution? Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development. Do you mean that synthetic data would allow you to try the model on more data sets? Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category. “In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.” —Andrew Ng Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data. Back to top To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment? Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data. One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory. How do you deal with changing needs? 
If products change or lighting conditions change in the factory, can the model keep up?
Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations. In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine-learning specialists?
So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.
Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.
Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?
Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.
This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
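To make the error-analysis step Ng describes above (the car-noise and pit-mark examples) concrete, here is a minimal, hypothetical Python sketch that computes a per-category error rate from evaluation results so the weakest slice stands out as the place to collect or synthesize more data. The categories and predictions are invented for illustration.

    from collections import defaultdict

    # Hypothetical evaluation results: (true_label, predicted_label) per image.
    results = [
        ("scratch", "scratch"), ("scratch", "scratch"), ("scratch", "dent"),
        ("dent", "dent"), ("dent", "dent"),
        ("pit_mark", "scratch"), ("pit_mark", "dent"), ("pit_mark", "pit_mark"),
    ]

    errors = defaultdict(lambda: [0, 0])  # label -> [wrong, total]
    for true_label, predicted in results:
        errors[true_label][1] += 1
        if predicted != true_label:
            errors[true_label][0] += 1

    # The slice with the highest error rate is the best target for more
    # (real or synthetic) data, rather than collecting more of everything.
    for label, (wrong, total) in sorted(errors.items(),
                                        key=lambda kv: kv[1][0] / kv[1][1],
                                        reverse=True):
        print(f"{label}: error rate {wrong / total:.2f} ({wrong}/{total})")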
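The drift-flagging tools Ng mentions are proprietary; the sketch below is only a generic illustration of the idea. It assumes you log a simple summary statistic for each production image (here, mean brightness) and compare its distribution against the training data with a two-sample Kolmogorov-Smirnov test from SciPy; a small p-value is a cue to review recent data, relabel, and retrain.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    # Hypothetical per-image brightness logged at training time and in production.
    training_brightness = rng.normal(loc=120.0, scale=10.0, size=1000)
    production_brightness = rng.normal(loc=135.0, scale=10.0, size=200)  # lighting changed

    # A small p-value suggests the production distribution no longer
    # matches the training distribution.
    statistic, p_value = ks_2samp(training_brightness, production_brightness)
    if p_value < 0.01:
        print(f"Possible data drift (KS={statistic:.2f}, p={p_value:.1e}); "
              "review recent images, relabel, and retrain.")
    else:
        print("No significant drift detected.")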

  • How AI Will Change Chip Design
    by Rina Diane Caballar on 8. February 2022. at 14:00

    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process. Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version. But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.
    How is AI currently being used to design the next generation of chips?
    Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There are a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider. Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.
    What are the benefits of using AI for chip design?
    Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced-order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that result from iterating quickly on the experiments and simulations that will really help in the design.
    So it’s like having a digital twin in a sense?
    Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.
    So, it’s going to be more efficient and, as you said, cheaper?
    Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things.
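    As a rough, hypothetical illustration of the surrogate-model workflow Gorr describes (not MathWorks’ tooling), the Python sketch below fits a cheap polynomial surrogate to a handful of runs of an “expensive” physics model and then does a Monte Carlo parameter sweep on the surrogate instead of the original model. The toy physics function and parameter range are invented.

        import numpy as np

        def expensive_physics_model(x):
            """Stand-in for a slow, physics-based simulation of one design parameter."""
            return np.sin(3 * x) + 0.5 * x**2

        # Run the expensive model only a few times to gather training data.
        x_train = np.linspace(0.0, 2.0, 8)
        y_train = expensive_physics_model(x_train)

        # Fit a cheap polynomial surrogate to those samples.
        surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=4))

        # Monte Carlo sweep on the surrogate: thousands of evaluations that
        # would be too costly with the full physics-based model.
        rng = np.random.default_rng(42)
        samples = rng.uniform(0.0, 2.0, size=10_000)
        best = samples[np.argmin(surrogate(samples))]
        print(f"Surrogate suggests a minimum near x = {best:.3f}")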
    That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.
    We’ve talked about the benefits. How about the drawbacks?
    Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years. Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together. One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.
    How can engineers use AI to better prepare and extract insights from hardware or sensor data?
    Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start. One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.
    What should engineers and designers consider when using AI for chip design?
    Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.
    How do you think AI will affect chip designers’ jobs?
    Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.
    How do you envision the future of AI and chip design?
    Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model.
    We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and the involvement of people of all skill levels in the process are going to be really important. We’re going to see fewer of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
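    Picking up on Gorr’s earlier point about high-frequency sensor data, resampling, and the frequency domain, here is a small, hypothetical Python sketch (not a MathWorks example) that resamples an unevenly sampled sensor signal onto a uniform grid and inspects its spectrum with an FFT to find the dominant frequency. The signal is synthetic.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic, unevenly sampled sensor signal: a 50 Hz component plus noise.
        t_irregular = np.sort(rng.uniform(0.0, 1.0, size=800))
        signal = np.sin(2 * np.pi * 50 * t_irregular) + 0.2 * rng.standard_normal(800)

        # Resample onto a uniform 1 kHz grid so standard FFT tools apply.
        fs = 1000.0
        t_uniform = np.arange(0.0, 1.0, 1.0 / fs)
        resampled = np.interp(t_uniform, t_irregular, signal)

        # Inspect the spectrum and report the dominant frequency.
        spectrum = np.abs(np.fft.rfft(resampled))
        freqs = np.fft.rfftfreq(len(resampled), d=1.0 / fs)
        dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
        print(f"Dominant frequency: about {dominant:.1f} Hz")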

  • Atomically Thin Materials Significantly Shrink Qubits
    by Dexter Johnson on 7. February 2022. at 16:12

    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality. IBM has adopted a superconducting-qubit road map that calls for reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.
    Now researchers at MIT have managed both to reduce the size of the qubits and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100. “We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient; they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”
    The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit. Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C). (Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Photo: Nathan Fiske/MIT.)
    In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another. As a result, the intrinsic silicon substrate below the plates and, to a smaller degree, the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance. In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.
    “We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory of Electronics. On either side of the hBN, the MIT researchers used the 2D superconducting material niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.
    While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard it as a limiting factor. “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed,’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.” This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide no longer plays a significant role.
    This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits. “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang. Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
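    For a back-of-the-envelope sense of why a few-nanometer hBN dielectric shrinks the footprint, the parallel-plate relation C = ε0 εr A / d implies that, for a fixed capacitance, the required plate area scales with the dielectric thickness. The Python sketch below plugs in illustrative values only: a qubit shunt capacitance of roughly 100 femtofarads, an out-of-plane relative permittivity for hBN of about 3.5, and a few-nanometer stack. None of these numbers come from the MIT paper; they are assumptions made for the sake of the estimate.

        # Back-of-the-envelope estimate with illustrative, assumed numbers
        # (not values reported by the MIT team).
        EPSILON_0 = 8.854e-12   # vacuum permittivity, F/m
        EPS_R_HBN = 3.5         # assumed out-of-plane relative permittivity of hBN
        THICKNESS = 3e-9        # assumed dielectric thickness: a few nm of stacked hBN
        TARGET_C = 100e-15      # assumed qubit shunt capacitance, ~100 fF

        # C = eps0 * eps_r * A / d  =>  A = C * d / (eps0 * eps_r)
        area_m2 = TARGET_C * THICKNESS / (EPSILON_0 * EPS_R_HBN)
        side_um = (area_m2 ** 0.5) * 1e6

        print(f"Required plate area: {area_m2 * 1e12:.1f} square micrometers")
        print(f"Equivalent square plate: about {side_um:.1f} um on a side, "
              "versus roughly 100 um by 100 um for a typical coplanar design")

    Even with generous error bars on these assumed inputs, the plate area comes out orders of magnitude smaller than the coplanar layout described above, which is the qualitative point of the footprint reduction.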