IEEE News

IEEE Spectrum

  • Being More Inclusive is Paying Off for This IEEE Society
    by Lisa Lazareck-Asunta on 3 December 2021 at 19:00

    Small changes made over time can lead to big results, the saying goes. A great example of that is the concerted effort the IEEE Instrumentation and Measurement Society started more than two decades ago to become a more welcoming and inclusive environment for women and members from outside the United States. Since 2012, the society has increased the number of female members by more than 60 percent. And more articles are now submitted from authors in China, India, and Italy than from North America.

    “We tackled one diversity factor at a time,” says IEEE Senior Member Ferdinanda Ponci, the society’s liaison to IEEE Women in Engineering (WIE), a position established more than 10 years ago to coordinate joint activities and programs. Ponci says being a liaison to the WIE committee has been a key factor for her engagement with diversity, equity, and inclusion activities. “This amazing group is a constant motivator and my reality check that women’s participation, career advancement, and DEI are very real, life-changing missions,” she says.

    Ponci is one of the people involved in the society’s efforts. She is also a member at large of the society’s administrative committee and conference treasurer. “I think we have a very nice representation at all levels now,” Ponci says, though she adds that there is still more to do. She is a professor and researcher in area monitoring and distributed control with the Institute for Automation of Complex Power Systems at RWTH Aachen University, in Germany.

    Ponci recently spoke with The Institute about how she and her colleagues improved diversity in the Instrumentation and Measurement Society. She said it was done “intentionally, strategically, and systematically,” as the 2016–2017 society president, Ruth Dyer, reminded Ponci and others on the society’s AdCom. Dyer is currently the IEEE Division II director.
DETERMINED ADVOCATES

During the past 20 years, the society has increased the number of women who present and lead sessions at its conferences, workshops, and symposia; hold leadership positions; and serve on technical program committees. The push came from advocates among the society’s officers who were determined to increase representation, Ponci says.

WIE Pledge Update

By Lisa Lazareck-Asunta

The IEEE Women in Engineering pledge was launched on 23 April with a commitment from four IEEE societies: Computational Intelligence, Computer, Engineering in Medicine and Biology, and Power & Energy. Since then, much progress has been made. As of press time 21 IEEE organizational units (OUs) had confirmed their commitment to the pledge, and 13 more were considering it. That includes regions, sections, societies, councils, and one student branch. Many OUs have broadened the scope of the pledge to encompass other aspects of diversity and inclusion that go beyond gender representation. OUs are making the pledge visible through their membership communications and websites. Some are working to make it part of their bylaws. WIE is collaborating with the IEEE Technical Activities Board’s diversity committee and IEEE conferences to ascertain how practices pertaining to the implementation of the pledge can best be achieved, shared, and improved across IEEE. If your OU is interested in joining the community of pledge-takers and changemakers, contact the WIE office. Taking the WIE pledge is just the first step on the road toward more visible and equitable representation on panels at IEEE meetings, conferences, and events.

In 1992 there was only one member of the society’s AdCom from outside the United States and Canada, and no women. That was the situation when Dyer’s husband, Stephen A. Dyer, joined the AdCom as editor in chief of the IEEE Transactions on Instrumentation and Measurement.
The IEEE Life Fellow began the effort to identify more women and individuals from other geographic regions to include as candidates on the AdCom ballot. True discussion on diversity, equity, and inclusiveness started then and continues, Ponci says.

Ruth Dyer was elected to the AdCom in 1999, and served in her first officer role in 2007. A second woman wasn’t elected to the committee until 2009. Between 2010 and 2021, the society had between six and nine female voting members, Ponci says. They have included elected members, officers not in a current elected position, and appointed representatives. Some were recruited from conference attendees.

Beginning in 2007, IEEE Fellow Reza Zoughi, who would later be elected the society’s 2014–2015 president, began nominating many of his current and former students for the appointed positions. He continues to be a strong advocate for diversity. The society has had at least one woman appointed as the undergraduate, graduate, or Young Professionals representative almost every year since 2007, Ponci says. That is important, she says, because the appointees have voting rights.

Ruth Dyer coordinated the first informal networking session for women in 2006 at the society’s flagship event, the IEEE International Instrumentation and Measurement Technology Conference. There was more involvement by women as speakers and technical program committee chairs at this year’s conference, Ponci says. Since 2012, Women in Measurement events have taken place regularly as formal parts of the program.

The society’s nominations and appointments committee saw to it that experienced women were nominated for leadership positions. The idea was that the officers would in turn identify other talented women for committee appointments. “It became normal, and it was expected that committees would be more diverse,” Ponci says.
“I think this was emotionally and culturally a big change.”

Those efforts would not have been possible without visible support from the society’s leaders, including the Dyers and Zoughi, along with many other officers who advocated for diversity and supported networking events for women at the society’s conferences, Ponci says. “They attended these events and encouraged other male and female members of the AdCom to attend to show support of the society,” she says. “We need more male advocates because without them, it looks like it’s a ‘woman’s thing,’ and it’s not.”

Women and individuals from a diversity of geographic areas who have technical expertise in instrumentation and measurement were encouraged to publish their research papers in the society’s publications and to serve as reviewers and associate editors.

IEEE Instrumentation & Measurement Magazine marked the contributions and achievements of the society’s female members and field experts in its June 2016 special issue. In her president’s message, Dyer wrote: “I am always impressed by the strength and excellence that result when we embrace and encourage diversity. Time and again, we discover that the most robust solutions are achieved when a plethora of perspectives are sought and incorporated. As the science and engineering disciplines continue to direct their attention and efforts toward increased inclusion, we know our Instrumentation and Measurement Society will continue to thrive and grow, because we are committed to fostering and reaping the benefits of an inclusive society.”

In June the society joined 21 other IEEE organizational units that took the IEEE WIE pledge to work toward “gender-diversified panels at all IEEE meetings, conferences, and events.”

GEOGRAPHIC DIVERSITY

To increase global representation, the society turned its attention to IEEE Regions 8, 9, and 10. “We started the geographic diversity effort purposefully,” Ponci says.
The society used the same strategy as it had with women: Get more qualified people from other regions on the ballot and in member-at-large positions; push them to become associate editors and reviewers of papers; and increase their representation on editorial boards. Researchers from Africa, Asia, Europe, Latin America, and the Middle East were encouraged to submit papers to the society’s publications.

Prior to 1997, there had been only one or two elected AdCom members from outside the United States and Canada, Ponci says. From 1997 to 2009, however, at least one of the four representatives elected each year was from Region 8, 9, or 10. That number increased to three from 2010 to this year. The first representative from Region 9, IEEE Senior Member Jorge F. Daher, went on to become the society’s 2012 president. The society elected the second representative from Region 10 in 2010 and one each year since.

From 2017 on, IEEE Fellow Shervin Shirmohammadi, editor in chief of the IEEE Transactions on Instrumentation and Measurement, has pushed for the inclusion of individuals from underrepresented geographic areas among associate editors. Region 10, which covers Asia and the Pacific Rim and hosts the largest community of instrumentation and measurement technologists, was targeted first. Most of the current editorial board members for the Transactions come from the region. “To be able to select the best of the best from a large group of submitters is more than we could hope for,” Ponci says.

REAPING THE BENEFITS

Ponci says the efforts to improve gender and geographic diversity are paying off. “We managed to attract the most motivated and active individuals in every area” of the society, she says. “The society’s activities have increased and improved.” Discussions with members are now a “mix of commonalities and differences,” she adds. “Of course, this requires people to really want to listen and not dismiss the point of view of others.
This is something that diversity and inclusion really pushes. It makes you stop and listen before you dismiss and before you judge.”

  • When They Electrified Christmas
    by Allison Marsh on 3 December 2021 at 16:51

    In much of the world, December is a month of twinkling lights. Whether for religious or secular celebrations, the variety and functionality of lights have exploded in recent years, abetted by cheap and colorful LEDs and compact electronics. Homeowners can illuminate the eaves with iridescent icicles, shroud their shrubs with twinkling mesh nets, or mount massive menorahs on their minivans.

    But decorative lights aren't new. On 22 December 1882, Edward H. Johnson, vice president of the Edison Electric Light Co., debuted electric Christmas lights when he lit up 80 hand-wired bulbs on a tree in the parlor of his New York City home. Johnson opted for red, white, and blue bulbs, and he mounted the tree on a revolving box. As the tree turned, the lights blinked on and off, creating a "continuous twinkling of dancing colors," as William Croffut reported in the Detroit Post and Tribune. (Croffut's account and many other wonderful facts about the history of decorative lights are available on the website Old Christmas Tree Lights.)

    Two years later, Johnson's growing display caught the attention of the New York Times. His tree now boasted 120 lights and more colors. He also created a fire "burning" in the fireplace; in reality, it was colored paper illuminated from below by electric lights.

    Johnson was part early adopter, part showman, and part publicist for the possibilities of electricity. But electric Christmas lights were not exactly practical at the time because most homes were not yet wired for electricity. Lighting had to be hand wired, and there was no standardization in bulbs or sockets. For the next several decades, decorative lights remained the playthings of the wealthy elite.

    Meanwhile, manufacturers continued to market new products. In 1890, the Edison Lamp Co.
began advertising miniature incandescent lamps that could be interwoven into garlands or used for decorating Christmas trees. A 1900 advertisement from General Electric (formed through the 1892 merger of Edison General Electric and Thomson-Houston Electric) touted the advantages of electric Christmas lighting over gas and candles: "No Danger, Smoke or Smell." Customers not quite willing to commit to the lights could rent them for the season.

In 1903, GE rocked the home-decorating world by introducing the first strings of prewired light sockets. The GE lights came in strands, or festoons, of 8, 16, 24, or 32 bulbs. Additional 8-bulb festoons could be added as desired. The sets included 50 feet (15 meters) of flexible cord and screwed into a lamp socket. The lights were connected in series, so if a bulb burned out, the rest of the line went dark. Detailed instructions described how to troubleshoot problems if the lights did not shine. A 24-light set cost US $12 (about $325 today) and could illuminate a medium table-top tree.

The GE lights were still out of reach for the average consumer, but prices quickly dropped as competitors entered the market. Just four years later, Excelsior Supply Co. of Chicago advertised "Winking Fairy Lights for Christmas Trees" in Hardware Dealers' Magazine. Each 8-bulb set cost only $5, with a wholesale price of $3.50.

U.S. President Grover Cleveland was an early adopter of the electrified Christmas tree, as shown in this 1896 photo. Photo: White House Historical Association

The earliest Christmas lights used miniature versions of Edison's pear-shaped carbon-filament bulbs, which had fragile exhaust tips that were prone to breaking. In 1910 GE changed its basic design to a round bulb, although still with the exhaust tip. In 1916, GE introduced tungsten filaments and gave the new design the Mazda trademark. Two years later, it removed the tips, making the bulbs more fully round.
In 1919 GE changed its bulb shape again, this time to the familiar cone resembling a candle flame. This bulb shape remained in use until the late 1970s and more recently has resurfaced as a "vintage" style. (Anyone interested in dating old sets of lights can consult the Old Christmas Tree Lights website, which was originally created by the brothers Bill and George Nelson.)

Just as the "typical" bulb was evolving, more decorative figural bulbs were arriving on the scene. By 1908, the Sears and Roebuck catalog featured a set of a dozen painted glass bulbs shaped like small fruits and nuts for $2.75. In 1910, an article in Scientific American described Christmas bulbs in the shape of flowers, animals, snowmen, angels, and Santa Claus. The 1919 Decorative Lamps catalog of the American Ever Ready Co. (precursor to the Eveready Battery Co.) branched out to bulbs of St. Patrick, Halloween pumpkins, clowns, and policemen. Austria, Germany, and Japan became famous for exporting figural bulbs.

These novelty bulbs had their challenges. Depending on the orientation of the base, some figures hung perpetually upside down. The paint could easily flake or wear off. Some of the designs are just creepy—like the two-faced doll's head shown at top. But I would happily decorate my home with many of these antique bulbs. To me, they're warmer and more inviting than some of the brash lights of today.

A couple who collected Christmas lights and so much more

While researching this month's column, I stumbled upon the doll's head bulb while scrolling through the online collections of the Smithsonian Institution. I had hoped to uncover more information about the object, but the online records had limited details. I did, however, learn a few things about the donor. The lightbulb came to the Smithsonian Institution's National Museum of American History in 1974 through a bequest from Edith R. Meggers, who had died the previous year.
Altogether, Meggers donated more than 800 objects, including badges and pins, dollhouse furniture, toys and games, calculators, typewriters, and electrical insulators. Meggers shared a passion for travel and collecting with her husband, William. They displayed their treasures in their home, which they dubbed "The Meggers Museum of Technology." Their collecting habits were well known in the Washington, D.C., area, and the Washington Post ran a feature article on them in 1941 with the subheading "They Collect Just About Anything You Can Name."

Collecting was just a hobby, though. Edith Meggers worked in the Building Technology Division of the National Bureau of Standards (NBS), which is where she met her husband; they married in 1920. Although Edith's name might not be familiar to Spectrum readers, her husband's may be. William F. Meggers was the chief of the Spectroscopy Section at NBS.

When William Meggers first came to NBS, spectroscopy was still a developing technique, and so he spent the first few years studying the process. His early papers examined the influence of the physical condition of samples, the method of excitation, and the apparatus used to obtain and record the spectral data. In 1922, Meggers published an influential paper (along with C.C. Kiess and F.J. Stimpson) on the use of spectrography in chemical analysis. Over his long career, he studied the spectra of 50 elements, establishing international standards for measurement.

How a museum curator curates

This 1895 ad equates electric lights with wealth and conviviality. Photo: History of Advertising Trust/Heritage Images/Getty Images

Of course, being a famous physicist and an avid collector doesn't fully explain how the Meggers' Christmas lights and other objects found their way to the Smithsonian.
The bequest of Edith Meggers was handled by her attorney and involved a dozen different curators because the collection spanned several divisions. Unfortunately, the donation records are sealed to researchers until 2032. But it is possible that Edith and Bill's collecting habits wore off on their daughter, Betty Jane Meggers. She earned her Ph.D. in archaeology from Columbia University in 1952 with a dissertation focusing on Marajó Island, Brazil. She enjoyed a long career with the Smithsonian Institution, where she was the director of the Latin American Archaeology Program at the National Museum of Natural History at the time of her death in 2012. So Betty Jane would have known the process for donating objects of value.

As a former curator with the Smithsonian, I remember getting phone calls and emails from potential donors who were cleaning out basements and attics and wondered if their mementos were worthy of inclusion in a national museum collection. Often, curators have to say no due to space limitations. However, if the object's provenance is well documented and the item is in good condition, curators may ask for more information, especially if there is a good story attached to it. The story is key because curators will have to justify the acquisition, explaining how the object will fit into the museum's collection plan and how it will be used in exhibits, educational programming, or research projects. Having additional contextual materials, such as photos or user manuals, helps make the case for the object's worthiness.

Another tip for donating is to make sure you match the item to the proper museum, library, or archive. A national museum might not be the right fit (and even if it's accepted, your item is much more likely to be put into storage and never go on display). Consider regional and local museums as well as specialized institutions.
For example, after my father died, I sent a photo of some of his engineering books to Jason Dean, vice president for special collections at the Linda Hall Library, in Kansas City, Mo. The Linda Hall is an independent research library that specializes in science, technology, and engineering. Dean took some of the books, and I had to pay shipping, but I am happy that they found a new home.

I'm one of those people who love learning through objects, and I enjoy trying to figure out what stories these old, everyday items can tell. Although I wish I could have learned more about the figurine light that Edith Meggers gifted the National Museum of American History, I found it fascinating to go down the rabbit hole of the history of the electrification of Christmas and to learn more about Edith and her family. Perhaps someday something of mine will end up in a museum—but my curatorial opinion is that the story isn't there yet.

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the December 2021 print issue as "It's a Wonderful Light."

  • Video Friday: Ameca Humanoid
    by Evan Ackerman on 3 December 2021 at 16:45

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

    CSIRO SubT Summit – December 10, 2021 – Online
    ICRA 2022 – May 23-27, 2022 – Philadelphia, PA, USA

    Let us know if you have suggestions for next week, and enjoy today's videos.

    Ameca is the world’s most advanced human shaped robot representing the forefront of human-robotics technology. Designed specifically as a platform for development into future robotics technologies, Ameca is the perfect humanoid robot platform for human-robot interaction. Apparently, the eventual plan is to get Ameca to walk. [ Engineered Arts ]

    Looks like Flexiv had a tasty and exceptionally safe Thanksgiving! But also kind of lonely 🙁 [ Flexiv ] Thanks, Yunfan!

    Cedars-Sinai is now home to a pair of Moxi robots, named Moxi and Moxi. Yeah, they should work on the names. But they've totally nailed the beeps! [ Diligent Robotics ] via [ Cedars Sinai ]

    Somehow we already have a robot holiday video, I don't know whether to be thrilled or horrified. The Faculty of Electrical Engineering of the CTU in Prague wishes you a Merry Christmas and much success, health and energy in 2022! [ CTU ]

    Carnegie Mellon University's Iris rover is bolted in and ready for its journey to the moon. The tiny rover passed a huge milestone on Wednesday, Dec. 1, when it was secured to one of the payload decks of Astrobotic's Peregrine Lunar Lander, which will deliver it to the moon next year. [ CMU ]

    This robot has some of the absolute best little feetsies I've ever. Seen. [ SDU ] Thanks, Poramate!

    With the help of artificial intelligence and four collaborative robots, researchers at ETH Zurich are designing and fabricating a 22.5-metre-tall green architectural sculpture.
[ ETH Zurich ]

Cassie Blue autonomously navigates on the second floor of the Ford Robotics Building at the University of Michigan. The total traverse distance is 200 m (656 feet). [ Michigan Robotics ] Thanks, Bruce!

The Mohamed Bin Zayed International Robotics Challenge (MBZIRC) will be held in the UAE capital, Abu Dhabi, in June 2023, where tech innovators will participate to seek marine safety and security solutions to take home more than US $3 million in prize money. [ MBZIRC ]

Madagascar Flying Labs and WeRobotics are using cargo drones to deliver essential medicines to very remote communities in northern Madagascar. This month, they delivered 250 doses of the Janssen COVID-19 vaccine for the first time, with many more such deliveries to come over the next 12 months. [ WeRobotics ]

It's... Cozmo? Already way overfunded on Kickstarter. [ Kickstarter ] via [ RobotStart ]

At USC's Center for Advanced Manufacturing, we have taught the Baxter robot to manipulate fluid food substances to create pancake art of various user created designs. [ USC ]

Face-first perching for fixed wing drones looks kinda painful, honestly. [ EPFL ]

Video footage from NASA’s Perseverance Mars rover of the Ingenuity Mars Helicopter’s 13th flight on Sept. 4 provides the most detailed look yet of the rotorcraft in action. During takeoff, Ingenuity kicks up a small plume of dust that the right camera, or “eye,” captures moving to the right of the helicopter during ascent. After its initial climb to planned maximum altitude of 26 feet (8 meters), the helicopter performs a small pirouette to line up its color camera for scouting. Then Ingenuity pitches over, allowing the rotors’ thrust to begin moving it horizontally through the thin Martian air before moving offscreen. Later, the rotorcraft returns and lands in the vicinity of where it took off.
The team targeted a different landing spot–about 39 feet (12 meters) from takeoff–to avoid a ripple of sand it landed on at the completion of Flight 12. [ JPL ]

I'm not totally sold on the viability of commercial bathroom cleaning robots, but I do appreciate how well the technology seems to work. In the videos, at least. [ SOMATIC ]

An interdisciplinary team at Harvard University School of Engineering and the Wyss Institute at Harvard University is building soft robots for older adults and people with physical impairments. Examples of these robots are the Assistive Hip Suit and Soft Robotic Glove, both of which have been included in the 2021-2022 Smithsonian Institution exhibit entitled "FUTURES". [ SI ]

Subterranean robot exploration is difficult with many mobility, communications, and navigation challenges that require an approach with a diverse set of systems, and reliable autonomy. While prior work has demonstrated partial successes in addressing the problem, here we convey a comprehensive approach to address the problem of subterranean exploration in a wide range of tunnel, urban, and cave environments. Our approach is driven by the themes of resiliency and modularity, and we show examples of how these themes influence the design of the different modules. In particular, we detail our approach to artifact detection, pose estimation, coordination, planning, control, and autonomy, and discuss our performance in the Final DARPA Subterranean Challenge. [ CMU ]

  • Clever Compression of Some Neural Nets Improves Performance
    by Matthew Hutson on 3 December 2021 at 14:00

    As neural networks grow larger, they become more powerful, but also more power-hungry, gobbling electricity, time, and computer memory. Researchers have explored ways to lighten the load, especially for deployment on mobile devices. One compression method is called pruning—deleting the weakest links. New research proposes a novel way to prune speech-recognition models, making the pruning process more efficient while also rendering the compressed model more accurate.

    The researchers addressed speech recognition for relatively uncommon languages. To learn speech recognition using only supervised learning, software requires a lot of existing audio-text pairings, which are in short supply for some languages. A popular method called self-supervised learning gets around the problem. In self-supervised learning, a model finds patterns in data without any labels—such as “dog” on a dog image. Artificial intelligence can then build on these patterns and learn more focused tasks using supervised learning on minimal data, a process called fine-tuning.

    In a speech recognition application, a model might take in hours of unlabeled audio recordings, silence short sections, and learn to fill in the blanks. Somehow it builds internal representations of the data that it can take in different directions. Then in fine-tuning it might learn to transcribe a given language using only minutes of transcribed audio. For each snippet of sound, it would guess the word or words, and update its connections based on whether it’s right or wrong.

    The authors of the new work explored a few ways to prune fine-tuned speech-recognition models. One way is called OMP (One-shot Magnitude Pruning), which other researchers had developed for image-processing models. They took a pre-trained speech-recognition model (one that had completed the step of self-supervised learning) and fine-tuned it on a small amount of transcribed audio. Then they pruned it. Then they fine-tuned it again.
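    The "deleting the weakest links" step has a compact expression in code. Below is a minimal NumPy sketch of one-shot magnitude pruning; the function name and the toy weight matrix are illustrative only, since the paper applies the idea to large speech-recognition networks rather than small arrays.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """One-shot magnitude pruning: zero out the smallest-magnitude
    fraction of weights, returning the pruned weights and the mask."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # Magnitude at the k-th smallest position becomes the cutoff
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# Example: prune half of a tiny weight matrix; the two
# smallest-magnitude entries (0.10 and -0.05) are zeroed
w = np.array([[0.10, -0.50],
              [0.90, -0.05]])
pruned, mask = magnitude_prune(w, 0.5)
```

    In OMP this pruning step is sandwiched between two rounds of fine-tuning, which is where the bulk of the compute goes.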
The team applied OMP to several languages and found that the pruned models were structurally very similar across languages. These results surprised them. “So, this is not too obvious,” says Cheng-I Jeff Lai, a doctoral student at MIT and the lead author of the new work. “This motivated our pruning algorithm.”

They hypothesized that, given the similarity in structure between the pruned models, pre-trained models probably didn’t need much fine-tuning. That’s good, because fine-tuning is a computationally intense process. Lai and his collaborators developed a new method, called PARP (Prune, Adjust and Re-Prune), that requires only one round of fine-tuning. They’ll present their paper this month at the NeurIPS (Neural Information Processing Systems) AI conference. The group’s research, Lai says, is part of an ongoing collaboration on low-resource language learning between MIT CSAIL and the MIT-IBM Watson AI Lab.

PARP starts, Lai says, with a pre-trained speech-recognition model, then prunes out the weakest links, but instead of deleting them completely, it just temporarily sets their strengths to zero. It then fine-tunes the model using labeled data, allowing the zeros to grow back if they’re truly important. Finally PARP prunes the model once again. Whereas OMP fine-tunes, prunes, and fine-tunes, PARP prunes, fine-tunes, and prunes. Pruning twice is computationally trivial compared with fine-tuning twice.

At realistic pruning levels, PARP achieved error rates similar to OMP while using half as many fine-tunings. Another interesting finding: In some setups where PARP pruned between 10% and 60% of a network, it actually improved automatic speech recognition (ASR) accuracy over an unpruned model, perhaps by eliminating noise from the network. OMP created no such boost. “This is one thing that impresses me,” says Hung-yi Lee, a computer scientist at National Taiwan University who was not involved in the work.
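A toy sketch makes the PARP schedule concrete. The NumPy code below is an illustrative approximation, not the authors' implementation: the first prune zeroes weak weights without removing them, a stand-in `finetune_step` callback (hypothetical, in place of real gradient-based fine-tuning) updates all weights so zeroed ones can regrow, and a final prune fixes the sparsity.

```python
import numpy as np

def parp(weights, sparsity, finetune_step):
    """Prune, Adjust, and Re-Prune (sketch, 1-D weight vector):
    the first prune only zeroes weights temporarily; fine-tuning
    may revive any of them before the final prune."""
    k = int(sparsity * weights.size)
    # Step 1: soft prune -- zero the weakest weights but keep them in the model
    w = weights.copy()
    w[np.argsort(np.abs(w))[:k]] = 0.0
    # Step 2: one round of fine-tuning updates ALL weights,
    # letting "pruned" weights grow back if they matter
    w = finetune_step(w)
    # Step 3: re-prune to the target sparsity
    w[np.argsort(np.abs(w))[:k]] = 0.0
    return w

# Toy example: the stand-in fine-tuning step revives the first weight,
# which was zeroed in step 1, so the final prune removes others instead
w0 = np.array([0.05, -0.20, 0.90, 0.01])
step = lambda w: w + np.array([0.50, 0.0, 0.0, 0.0])
final = parp(w0, 0.5, step)
```

The contrast with OMP is visible in the schedule: OMP pays for two fine-tuning rounds around one prune, while PARP pays for one fine-tuning round between two (cheap) prunes.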
Lai says PARP or something like it could lead to ASR models that, compared with current models, are faster and more accurate, while requiring less memory and less training. He calls for more research into practical applications. (One research direction applies pruning to speech synthesis models. He’s submitted a paper on the topic to next year’s ICASSP conference.) “A second message,” he says, given some of the surprising findings, “is that pruning can be a scientific tool for us to understand these speech models deeper.”

  • How Much Has Quantum Computing Actually Advanced?
    by Dan Garisto on 2 December 2021 at 21:21

    Lately, it seems as though the path to quantum computing has more milestones than there are miles. Judging by headlines, each week holds another big announcement—an advance in qubit counts, or another record-breaking investment: First IBM announced a 127-qubit chip. Then QuEra announced a 256-qubit neutral atom quantum computer. There’s now a new behemoth quantum computing company, “Quantinuum,” thanks to the merger of Honeywell Quantum Solutions and Cambridge Quantum. And today, Google announced that its Sycamore processor had taken another leap toward quantum error correction.

    A curmudgeon might argue that quantum computing is like fusion, or any promising tech whose real rewards are—if even achievable—decades off. The future remains distant, and all the present has for us is smoke, mirrors, and hype. To rebut the cynic, an optimist might point to the glut of top-tier research being done in academia and industry. If there’s new news each week, it’s a sign that sinking hundreds of millions into a really hard problem does actually reap rewards.

    For a measured perspective on how much quantum computing is actually advancing as a field, we spoke with John Martinis, a professor of physics at the University of California, Santa Barbara, and the former chief architect of Google’s Sycamore.

    IEEE Spectrum: So it's been about two years since you unveiled results from Sycamore. In the last few weeks, we've seen announcements of a 127-qubit chip from IBM and a 256-qubit neutral atom quantum computer from QuEra. What kind of progress would you say has actually been made?

    John Martinis: Well, clearly, everyone's working hard to build a quantum computer. And it's great that there are all these systems people are working on. There's real progress. But if you go back to one of the points of the quantum supremacy experiment—and something I've been talking about for a few years now—one of the key requirements is gate errors. I think gate errors are way more important than the number of qubits at this time.
It's nice to show that you can make a lot of qubits, but if you don't make them well enough, it's less clear what the advance is. In the long run, if you want to do a complex quantum computation, say with error correction, you need way below 1% gate errors. So it's great that people are building larger systems, but it would be even more important to see data on how well the qubits are working. In this regard, I am impressed with the group in China who reproduced the quantum supremacy results, where they show that they can operate their system well with low errors. John Martinis [Photo: Spencer Bruttig/UCSB] I want to drill down on “scale versus quality,” because I think it's sort of easy for people to understand that 127 qubits is more qubits. Yes, it’s a good advance, but computer companies know all about systems engineering, so you have to also improve reliability by making qubits with lower errors. So I know that Google, and I believe Chris Monroe's group, have both come up with fault tolerance results this year. Could you talk about any of those results? I think it's good that these experiments were done. They're a real advance in the field to be able to do error correction. Unfortunately, I don’t completely agree with calling such experiments fault tolerance, as it makes one think you’ve solved error correction, but in fact it’s just the first step. In the end, you want to do error correction so that the net logical error [rate] is something like 10⁻¹⁰ to 10⁻²⁰, and the experiments that were done are nowhere near telling you yet that it's possible. Yeah, I think they're like 10⁻³. It depends how you want to quantify it, but it’s not a huge factor. It could be a bit better if you had more qubits, but you would maybe have to architect it in a different way. I don’t think it is good for the field to oversell results, making people think that you're almost there. 
It's progress, and that's great, but there still is a long way to go. I remember that IBM had, once upon a time, touted their quantum volume as a more appropriate universal benchmark. Do you have thoughts about how people can reasonably compare claims between different groups, even using different kinds of qubits? Metrics are needed, but it is important to choose them carefully. Quantum volume is a good metric. But is it really possible to expect something as new and complex as a quantum computer system to be characterized by one metric? You know, you can't even characterize your computer, your cell phone, by one metric. In that case, if there's any metric, it's the price of the cell phone. [laughing] Yeah, that's true. I think it is more realistic at this time to consider a suite of metrics, something that needs to be figured out in the next few years. At this point, building a quantum computer is a systems engineering problem, where you have to get a bunch of components all working well at the same time. Quantum volume is good because it combines several metrics together, but it is not clear they are put together in the best way. And of course if you have a single metric, you tend to optimize to that one metric, which is not necessarily solving the most important systems problems. One of the reasons we did the quantum supremacy experiment was because you had to get everything working well, at the same time, or the experiment would fail. I mean, from my perspective, really the only thing that's been a reliable benchmark—or that I even get to see—is usually some kind of sampling problem, whether it's boson sampling or Gaussian boson sampling. As you said, it’s trying to see: can you actually get a quantum advantage over these classical computers? And then, of course, you have a really interesting debate about whether you can spoof the result. But there's something happening there. It's not just PR. Yeah. 
You're performing a well-defined experiment, and then you directly compare it to a classical calculation. Boson sampling was the first proposal, and then the Google theory group figured out a way to do an analogous experiment with qubits. For the boson sampling, there’s a nice experiment coming from USTC in China, and there's an interesting debate that says the experiment is constructed in such a way that you can classically compute the results, whereas USTC believes there are higher-order correlations that are hard to compute. It’s great the scientists are learning more about these metrics through this debate. And it’s also been good that various groups have been working on the classical computation part of the Google quantum supremacy experiment. I am still interested whether IBM will actually run their algorithm on a supercomputer to see if it is a practical solution. But the most important result for the quantum supremacy experiment is that we showed there are no additional errors, fundamental or practical, when running a complex quantum computation. So this is good news for the field as we continue to build more powerful machines. It's interesting, because I think there is that real interplay between the theory and the experiment, when you get to this cutting edge stuff, and people aren't quite sure where either side is and both keep making advances forward. For classical computers, there has always been good interplay between theory and experiment. But because of the exponential power of a quantum computer, and because the ideas are still new and untested, we are expecting scientists to continue to be quite inventive. What does the next step look like for quality? You were saying that that's the main roadblock. We are so far from having the kind of fidelity that we need. What is the next step for error correction? What should we be looking for? In the last year Google had a nice paper on error correction for bit flips or phase flips. 
They understood the experiment well, and discussed what they would have to do for error correction to work well for having both bit and phase at the same time. It has been clear for some time that the major advance is to improve gate errors, and to build superconducting qubits with better coherence. That's also something that I've been thinking about for a couple years. I think it's definitely possible, especially with the latest IBM announcement that they were able to build their 127-qubit device with long coherence times throughout the array. So for example, if you could have this coherence in the more complex architecture of the Google Sycamore processor, you would then have really good gate errors well below 0.1%. This is of course not easy because of systems engineering issues, but it does show that there is a lot of room for improvement with superconducting qubits. IBM's 127-qubit quantum processor [Image: IBM] You were saying that there is a trade-off between the gate coupling control and the coherence time of the qubit. You think we can overcome that trade-off? Obviously the engineering and the physics pushes against each other. But I think that can be overcome. I'm pretty optimistic about solving this problem. People know how to make good devices at this time, but we probably need to understand all the physics and constraints better, and to be able to predict the coherence accurately. We need more research to improve the technology readiness level of this technology. What would you say is the most overlooked, potential barrier to overcome? I've written about control chips, the elimination of the chandelier of wires, and getting down to something that's actually going to fit inside your dilution fridge. I have thought about wiring for about five years now, starting at Google. I can't talk about it, but I think there's a very nice solution here. I think this can be built given a focused effort. 
Is there anything we haven't talked about that you think is important for people to know about the state of quantum computing? I think it's a really exciting time to be working on quantum computing, and it’s great that so many talented engineers and scientists are now in the field. In the next few years I think there will be more focus on the systems engineering aspects of building a quantum computer. As an important part of systems engineering is testing, better metrics will have to be developed. The quantum supremacy experiment was interesting as it showed that a powerful quantum computer could be built, and the next step will be to show both a powerful and useful computer. Then the field will really take off. Some kind of standardization. Yes, this will be an important next step. And I think such a suite of standards will help the business community and investors, as they will be better able to understand what developments are happening. Not quite a consumer financial protection bureau, but some kind of business protection for investors. With such a new technology, it is hard to understand how progress is being made. I think we can all work on ways to better communicate how this technology is advancing. I hope this Q&A has helped in this way.
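Martinis's numbers give a feel for the gap he describes: demonstrated error rates near 10⁻³ versus logical targets of 10⁻¹⁰ or better. A back-of-the-envelope Python sketch, using the commonly quoted surface-code estimate p_L ≈ 0.1 · (p/p_th)^((d+1)/2); note that the 0.1 prefactor, the assumed ~1% threshold, and the 2d² qubit count are rough textbook approximations, not figures from the interview:

```python
# Back-of-envelope: how far ~1e-3 physical gate errors are from the
# 1e-10 logical error rates Martinis mentions, using the commonly quoted
# surface-code estimate p_L ~ 0.1 * (p/p_th)**((d+1)/2).
# Assumptions (not from the interview): 0.1 prefactor, 1% threshold.

def logical_error_rate(p, d, p_th=0.01):
    """Approximate surface-code logical error rate at code distance d."""
    return 0.1 * (p / p_th) ** ((d + 1) / 2)

def distance_needed(p, target=1e-10, p_th=0.01):
    """Smallest odd code distance whose estimated p_L falls below target."""
    d = 3
    while logical_error_rate(p, d, p_th) > target:
        d += 2
    return d

for p in (5e-3, 1e-3, 1e-4):  # candidate physical gate error rates
    d = distance_needed(p)
    qubits = 2 * d * d  # very rough physical-qubit count per logical qubit
    print(f"p={p:.0e}: distance {d}, ~{qubits} physical qubits per logical qubit")
```

The point of the exercise is the steep payoff from better gates: each order-of-magnitude drop in physical error rate shrinks the code distance, and hence the qubit overhead, dramatically.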

  • AI Training Is Outpacing Moore’s Law
    by Samuel K. Moore on 2. Decembra 2021. at 14:00

    The days and sometimes weeks it took to train AIs only a few years ago were a big reason behind the launch of billions of dollars’ worth of new computing startups over the last few years—including Cerebras Systems, Graphcore, Habana Labs, and SambaNova Systems. In addition, Google, Intel, Nvidia, and other established companies made similarly large internal investments (and sometimes acquisitions). With the newest edition of the MLPerf training benchmark results, there’s clear evidence that the money was worth it. The gains to AI training performance since MLPerf benchmarks began “managed to dramatically outstrip Moore’s Law,” says David Kanter, executive director of the MLPerf parent organization MLCommons. The increase in transistor density would account for a little more than a doubling of performance between the early version of the MLPerf benchmarks and those from June 2021. But improvements to software as well as processor and computer architecture produced a 6.8- to 11-fold speedup for the best benchmark results. In the newest tests, called version 1.1, the best results improved by up to 2.3 times over those from June. For the first time Microsoft entered its Azure cloud AI offerings into MLPerf, muscling through all eight of the test networks using a variety of resources. They ranged in scale from 2 AMD Epyc CPUs and 8 Nvidia A100 GPUs to 512 CPUs and 2048 GPUs. Scale clearly mattered. The top range trained AIs in less than a minute, while the two-and-eight combination often needed 20 minutes or more. Moore's Law can only do so much. 
Software and other progress have made the difference in training AIs. [Chart: MLCommons] Nvidia worked closely with Microsoft on the benchmark tests, and, as in previous MLPerf lists, Nvidia GPUs were the AI accelerators behind most of the entries, including those from Dell, Inspur, and Supermicro. Nvidia itself topped all the results for commercially available systems, relying on the unmatched scale of its Selene AI supercomputer. Selene is made up of commercially available modular DGX SuperPod systems. In its most massive effort, Selene brought to bear 1080 AMD Epyc CPUs and 4320 A100 GPUs to train the natural language processor BERT in less than 16 seconds, a feat that took most smaller systems about 20 minutes. According to Nvidia the performance of systems using A100 GPUs has increased more than 5-fold in the last 18 months and 20-fold since the first MLPerf benchmarks three years ago. That’s thanks to software innovation and improved networks, the company says. (For more, see Nvidia's blog.) Given Nvidia’s pedigree and performance on these AI benchmarks, it's natural for new competitors to compare themselves to it. That’s what UK-based Graphcore is doing when it notes that its base computing unit, the Pod16—1 CPU and 16 IPU accelerators—beats Nvidia's base unit, the DGX A100—2 CPUs and 8 GPUs—by nearly a minute. Graphcore brought its bigger systems out to play. [Image: Graphcore] For this edition of MLPerf, Graphcore debuted image classification and natural language processing benchmarks for its combinations of those base units, the Pod64, Pod128, and (you saw this coming, right?) Pod256. The latter, made up of 32 CPUs and 256 IPUs, was the fourth fastest system behind Nvidia’s Selene and Intel’s Habana Gaudi to finish ResNet image classification training in 3:48. For natural language processing the Pod256 and Pod128 were third and fourth on the list, again behind Selene, finishing in 6:54 and 10:36. (For more see Graphcore's blog.) 
You might have noticed that the CPU-to-accelerator chip ratios are quite different between Nvidia-based offerings—about 1 to 4—and Graphcore’s systems—as low as 1 to 32. That’s by design, say Graphcore engineers. The IPU is designed to depend less on a CPU’s control when operating neural networks. You can see the opposite with Habana Labs, which Intel purchased for about US $2 billion in 2019. For example, for its high-ranking training on image classification, Intel used 64 Xeon CPUs and 128 Habana Gaudi accelerators to train ResNet in less than 5:30. It used 32 CPUs and 64 accelerators to train the BERT natural language neural net in 11:52. (For more, see Habana's blog.) Google’s contribution to this batch of benchmark scores was a bit unusual. Rather than demonstrate commercial or cloud systems with the company’s TPU v4 processor technology, Google engineers submitted results for two hugely outsized natural language processing neural nets. Using its publicly available TPU v4 cloud, the company ran a version of Lingvo, an NLP with a whopping 480 billion parameters compared to BERT’s 110 million. The cloud platform used 1024 AMD Epyc CPUs and 2048 TPUs to complete the training task in just under 20 hours. Using a research system consisting of 512 AMD Rome CPUs and 1024 TPUs, Google trained a 200-billion-parameter version of Lingvo in 13.5 hours. (It took 55 hours and 44 hours to do the whole process end-to-end including steps needed to get the training started, Google reports.) Structurally, Lingvo is similar enough to BERT to fit into that category, but it also resembles other really large conversational AIs, such as LaMDA and GPT-3, that computing giants have been working on. Google thinks that huge-model training should eventually become a part of future MLPerf commercial benchmarks. (For more, see Google's blog.) However, MLCommons’ Kanter points out that the expense of training such systems is high enough to exclude many participants.
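Kanter's "outstripped Moore's Law" claim comes down to simple compound-growth arithmetic, which this sketch makes explicit. The roughly 2.5-year span (first MLPerf round in late 2018 to June 2021) and the two-year doubling period are assumptions for illustration, not figures stated by MLCommons:

```python
# Compare transistor scaling alone against the 6.8x to 11x speedups
# MLPerf reported. Assumptions: ~2.5-year span between benchmark rounds
# and a doubling of performance every 2 years from density alone.

def moores_law_factor(years, doubling_period=2.0):
    """Performance factor expected from transistor doubling alone."""
    return 2 ** (years / doubling_period)

years = 2.5
baseline = moores_law_factor(years)
print(f"Moore's Law alone over {years} years: ~{baseline:.1f}x")

for observed in (6.8, 11.0):  # best-case MLPerf speedup range reported
    surplus = observed / baseline
    print(f"Observed {observed}x -> ~{surplus:.1f}x beyond transistor scaling")
```

Under these assumptions, density alone buys only about 2.4x, so software and architecture account for the remaining factor of roughly 3 to 5, which is the gap Kanter is pointing at.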

  • IEEE President’s Note: We’re Committed to Diversity, Equity, and Inclusion
    by Susan K. “Kathy” Land on 1. Decembra 2021. at 19:00

    IEEE’s mission to foster technological innovation and excellence to benefit humanity requires the talents and perspectives of people with different personal, cultural, and technical backgrounds. In support of this mission—and to aid our members and volunteers—it is vital that members have a safe and inclusive place for collegial discourse, where all feel that they are included and belong. IEEE reinforced its support for diversity and inclusion in 2019 when the IEEE Board of Directors adopted the following: “IEEE is committed to advancing diversity in the technical profession, and to promoting an inclusive and equitable culture that welcomes, engages, and rewards all who contribute to the field, without regard to race, religion, gender, disability, age, national origin, sexual orientation, gender identity, or gender expression.” Last year the three presidents of IEEE emphasized that commitment with the following: “IEEE is, and remains, strongly committed to diversity, equity, and inclusion and we see no place for hatred and discrimination in our communities.” Both statements reflect IEEE’s longstanding commitment to engage diverse perspectives for the betterment of the engineering profession and ensure a welcoming environment that equitably engages, supports, and recognizes the diverse individuals dedicated to advancing technology for the benefit of humanity. I think it was very important for the organization to make these public statements, as they show that IEEE believes that embracing diversity and inclusion as organizational values is a way to intentionally increase its ability to listen to, and empower, all stakeholders. SUPPORTING DIVERSITY IEEE has supported diversity and inclusion for many years through numerous efforts and programs. A number of committees within IEEE have been doing important work in these areas over the past few years. Along with many other dedicated volunteers, 2019 IEEE President José M. F. 
Moura and Andrea Goldsmith, chair of the IEEE Ad Hoc Committee on Diversity, Inclusion, and Professional Ethics since its inception in 2019, were key leaders. Building on this momentum, a new website launched this year; it contains a wealth of information, resources, and tools for members, volunteers, and the broader community. The site highlights ongoing efforts by various IEEE groups that are taking action to foster a diverse, equitable, and welcoming environment. I truly hope this website can help raise awareness of the importance of diversity and inclusion in creating technology to benefit humanity. Another important step in IEEE’s collective journey toward an inclusive and equitable culture includes recent revisions to the IEEE Publications Services and Products Operations Manual. The revisions permit authors to change their preferred name—whether it be due to marriage or divorce, religious conversion, or gender alignment—and stipulate that IEEE will modify the metadata associated with their IEEE publications upon successful validation of the identity of the requesting author. Given our mission, IEEE collaborates globally with all our stakeholders and must seek to maintain an open and inclusive platform for our authors. These revisions recognize the importance that authors place on managing their own name and identity. THE IMPORTANCE OF CODES An organization’s ethics speaks to how it supports diversity and inclusion. I am proud to say that IEEE is ahead of many professional societies in having a code of ethics and code of conduct, both of which were revised last year. These reviews and revisions were necessary because our policies had not been reexamined in many years. The updates incorporate high-level principles such as a commitment not to engage in harassment and to protect the privacy of others. 
These changes reflect IEEE’s longstanding commitment to ensuring the engineering profession maximizes its impact and success by welcoming, engaging, and rewarding all those who contribute to the field in an equitable manner. In addition, as part of IEEE’s commitment to meeting the highest standards of integrity, responsibility, and ethical behavior, the IEEE Board of Directors adopted a set of changes to the IEEE Bylaws and Policies to strengthen our Ethics and Member Conduct processes around reporting, mediation, adjudication, appealing, and sanctioning ethical misconduct. The new ethics reporting processes went into effect on 1 April. The primary goals of the changes are to simplify the process for filing reports of misconduct, to increase the transparency of how IEEE handles complaints, and to expand the accessibility of the process and make it more inclusive. The time frame to report professional ethics violations has been increased from two years to five years from the date of the incident. These improvements reinforce the value IEEE places on holding our members and stakeholders to the highest ethical standards. I am extremely proud of the good work that we have been accomplishing across IEEE to ensure that our environments are safe and that our members have collaborative and collegial places that promote the best technical discussions, where all voices are heard. I urge all IEEE’s entities to continue to work together to meet the growing expectations of members and other stakeholders for an inclusive and equitable culture that welcomes, engages, and rewards all who contribute to the field. Please share your thoughts with me at

  • Building Human-Robot Relationships Through Music and Dance
    by Evan Ackerman on 1. Decembra 2021. at 15:29

    There’s no reliably good way of getting a human to trust a robot. Part of the problem is that robots, generally, just do whatever they’ve been programmed to do, and for a human, there’s typically no feeling that the robot is in the slightest bit interested in making any sort of non-functional connection. From a robot’s perspective, humans are fragile ambulatory meatsacks that are not supposed to be touched and who help with tasks when necessary, nothing more. Humans come to trust other humans by forming an emotional connection with them, something that robots are notoriously bad at. An emotional connection obviously doesn’t have to mean love, or even like, but it does mean that there’s some level of mutual understanding and communication and predictability, a sense that the other doesn’t just see you as an object (and vice versa). For robots, which are objects, this is a real challenge, and with funding from the National Science Foundation, roboticists from the Georgia Tech Center for Music Technology have partnered with the Kennesaw State University dance department on a “forest” of improvising robot musicians and dancers who interact with humans to explore creative collaboration and the establishment of human-robot trust. According to the researchers, the FOREST robots and accompanying musical robots are not rigid mimickers of human melody and movement; rather, they exhibit a remarkable level of emotional expression and human-like gesture fluency–what the researchers call “emotional prosody and gesture” to project emotions and build trust. Looking up what “prosody” means will absolutely take you down a Wikipedia black hole, but the term broadly refers to parts of speech that aren’t defined by the actual words being spoken. For example, you could say “robots are smart” and impart a variety of meanings to it depending on whether you say it ironically or sarcastically or questioningly or while sobbing, as I often do. That’s prosody. 
You can imagine how this concept can extend to movements and gestures as well, and effective robot-to-human interaction will need to account for this. Many of the robots in this performance are already well known, including Shimon, one of Gil Weinberg’s most creative performers. Here’s some additional background about how the performance came together: What I find personally a little strange about all this is the idea of trust, because in some ways, it seems as though robots should be totally trustworthy because they can (in an ideal world) be totally predictable, right? Like, if a robot is programmed to do things X, Y, and Z in that sequence, you don’t have to trust that a robot will do Y after X in the same way that you’d have to trust a human to do so, because strictly speaking the robot has no choice. As robots get more complicated, though, and there’s more expectation that they’ll be able to interact with humans socially, that gap between what is technically predictable (or maybe, predictable after the fact) and what is predictable by the end user can get very, very wide, which is why a more abstract kind of trust becomes increasingly important. Music and dance may not be the way to make that happen for every robot out there, but it’s certainly a useful place to start.

  • Self-Driving Microscopes to Navigate the Nanoscale
    by Dan Garisto on 30. Novembra 2021. at 15:00

    It's difficult to find an area of scientific research where deep learning isn't discussed as the next big thing. Claims abound: deep learning will spot cancers; it will unravel complex protein structures; it will reveal new exoplanets in previously-analyzed data; it will even discover a theory of everything. Knowing what's real and what's just hype isn't always easy. One promising—perhaps even overlooked—area of research in which deep learning is likely to make its mark is microscopy. In spite of new discoveries, the underlying workflow of techniques such as scanning probe microscopy (SPM) and scanning transmission electron microscopy (STEM) has remained largely unchanged for decades. Skilled human operators must painstakingly set up, observe, and analyze samples. Deep learning has the potential to not only automate many of the tedious tasks, but also dramatically speed up the analysis time by homing in on microscopic features of interest. "People usually just look at the image and they identify a few properties of interest," says Maxim Ziatdinov, a researcher at Oak Ridge National Lab in Tennessee. "They basically discard most of the information, because there is just no way to actually extract all the features of interest from the data." With deep learning, Ziatdinov says that it's possible to extract information about the position and type of atomic structures (that would otherwise escape notice) in seconds, opening up a vista of possibilities. It's a twist on the classical dream of doing more with smaller things (most famously expressed in Richard Feynman's "There's Plenty of Room at the Bottom"). Instead of using hardware to improve the resolution of microscopes, software could expand their role in the lab by making them autonomous. "Such a machine will 'understand' what it is looking at and automatically document features of interest," an article in the Materials Research Society Bulletin declares. 
"The microscope will know what various features look like by referencing databases, or can be shown examples on-the-fly." Despite its micro- prefix, microscopy techniques such as SPM and STEM actually deal with objects on the nanoscale, including individual atoms. In SPM, a nanoscale tip hovers over the sample's surface and, like a record player, traces its grooves. The result is a visual image instead of an audio signal. On the other hand, STEM generates an image by showering a sample with electrons and collecting those which pass through, essentially creating a negative. Both microscopy techniques allow researchers to quickly observe the broad structural features of a sample. Researchers like Ziatdinov are interested in the functional properties of certain features such as defects. By applying a stimulus like an electric field to a sample, they can measure how it responds. They can also use the sample's reactions to the applied stimuli to build a functional map of the sample. But taking functional data takes time. Zooming in on a structural image to take functional data is time-prohibitive, and human operators have to make educated guesses about which features they are hoping to analyze. There hasn't been a rigorous way to predict functionality from structure, so operators have simply had to get a knack for picking good features. In other words, the cutting edge of microscopy is just as much art as it is science. The hope is that this tedious feature-picking can be outsourced to a neural network that predicts features of interest and navigates to them, dramatically speeding up the process. Automated microscopy is still at the proof-of-concept stage, with a few groups of researchers around the world hammering out the principles and doing preliminary tests. 
Unlike many areas of deep learning, success here would not be simply automating preexisting measurements; with automation, researchers could make measurements that have heretofore been impossible. Ziatdinov and his colleagues have already made some progress toward such a future. For years, they sat on microscopy data that would reveal details about graphene—a few frames that showed a defect creating strain in the atomically thin material. "We couldn't analyze it, because there's just no way that you can extract positions of all the atoms," Ziatdinov says. But by training a neural net on the graphene, they were able to categorize newly recognized structures on the edges of defects. Microscopy isn't just limited to observing. By blasting samples with a high-energy electron beam, researchers can shift the position of atoms, effectively creating an "atomic forge." As with a conventional bellows-and-iron forge, automation could make things a lot easier. An atomic forge guided by deep learning could spot defects and fix them, or nudge atoms into place to form intricate structures—around the clock, without human error, sweat, or tears. "If you actually want to have a manufacturing capability, just like with any other kind of manufacturing, you need to be able to automate it," he says. Ziatdinov is particularly interested in applying automated microscopy to quantum devices, like topological qubits. Efforts to reliably create these qubits have not proven successful, but Ziatdinov thinks he might have the answer. By training a neural network to understand the functions associated with specific features, deep learning could unlock which atomic tweaks are needed to create a topological qubit—something humans clearly haven't quite figured out. Benchmarking exactly how far we are from a future where autonomous microscopy helps build quantum devices isn't easy. 
There are few human operators in the entire world, so it's difficult to compare deep learning results to a human average. It's also unclear which obstacles will pose the biggest problems moving forward in a domain where the difference of a few atoms can be decisive. Researchers are also applying deep learning to microscopy on other scales. Confocal microscopy, which operates at a scale thousands of times larger than SPM and STEM, is an essential technique that gives biologists a window into cells. By integrating new hardware with deep learning software, a team at the Marine Biological Laboratory in Woods Hole, Mass., dramatically improved the resolution of images taken from a variety of samples such as cardiac tissue in mice and cells in fruit fly wings. Critically, deep learning allowed the researchers to use much less light for imaging, reducing damage to the samples. The conclusion reached by a recent review of the prospects for autonomous microscopy is that it "will enable fundamentally new opportunities and paradigms for scientific discovery." But it came with the caveat that "this process is likely to be highly nontrivial." Whether deep learning lives up to its promise on the microscopic frontier remains, literally, to be seen.
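The scan-detect-navigate loop described above can be sketched in miniature. In this toy Python example, a local-maximum threshold detector stands in for the trained neural net, and the slow "functional measurements" become a budgeted queue of sites; every name, number, and function here is illustrative, not drawn from any real microscopy software:

```python
# Toy sketch of an autonomous-microscopy loop: scan a (synthetic) image,
# detect bright features standing in for atoms or defects, then spend the
# costly functional-probe time only at detected sites. A real system would
# replace the threshold detector with a trained neural network.

def detect_features(image, threshold=0.5):
    """Return (row, col) of pixels above threshold that beat their neighbors."""
    h, w = len(image), len(image[0])
    peaks = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            v = image[r][c]
            if v < threshold:
                continue
            neighbors = [image[r-1][c], image[r+1][c], image[r][c-1], image[r][c+1]]
            if all(v >= n for n in neighbors):
                peaks.append((r, c))
    return peaks

def plan_measurements(peaks, budget=3):
    """Queue the first `budget` sites for slow functional measurements."""
    return peaks[:budget]

# Synthetic 5x5 "scan" with two bright spots playing the role of defects.
scan = [[0.1] * 5 for _ in range(5)]
scan[1][1] = 0.9
scan[3][3] = 0.8

sites = detect_features(scan)
print(sites)                     # detected feature coordinates
print(plan_measurements(sites))  # where to spend functional-probe time
```

The structure, not the detector, is the point: once feature-picking is a function call, the microscope can loop over scan, detect, and measure without a human choosing sites by intuition.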

  • 100 Years Ago RCA Inaugurated Its Global Wireless System
    by Joanna Goodrich on 29. Novembra 2021. at 19:00

    When the Radio Corporation of America was founded in 1919, one of its missions was to become the world’s leading radio communications company. Among RCA’s first steps was to build a center that would transmit international signals and telegraphic messages. In 1921 the company opened Radio Central, in Rocky Point on New York’s Long Island. The transmitting facility was connected to a radiogram-receiving facility in Riverhead—about 24 kilometers away on Long Island—through a system of relays using existing telephone lines. The two locations worked with the Central Traffic Office in New York City to send and receive messages. On 5 November 1921, U.S. President Warren G. Harding made history when he sent a radio message over RCA Radio Central’s relay network from the White House across the world. Acknowledgements of the message were received at the Riverhead facility from Australia, Japan, and 15 other countries. On the 100th anniversary of the sending of that radiogram, RCA Radio Central was commemorated with an IEEE Milestone. The IEEE Long Island Section sponsored the nomination. Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world. RCA Radio Central was home to many other telecommunications advances including folded-wave and rhombic antennas; the original National Broadcasting Co. TV antenna installed atop the Empire State Building, in New York City; and high-frequency vacuum tubes. THE CREATION OF RCA Before the start of World War I the primary radio communications provider in the United States was American Marconi, in Aldene, N.J. The company—a subsidiary of Marconi, in London—provided wireless telegraph services for commercial shipping and transoceanic customers. The U.S. 
Navy took over operations at the start of the war when American Marconi couldn’t prove that it wasn’t controlled by its British parent, according to the entry for the Radio Central Milestone in the Engineering and Technology History Wiki (ETHW). After the war was over, the Navy relinquished all Marconi facilities, according to the 1944 book The First 25 Years of RCA: A Quarter Century of Radio Progress. To make sure American Marconi would be under U.S. control, General Electric purchased the company. In 1919 GE formed RCA to take control of Marconi’s business, property, patents, transatlantic shipboard stations, and contracts. It also turned over rights to its own radio patents to RCA, according to the book. THE TRANSMISSION CENTER RCA started construction of Radio Central in 1920. It occupied 26 square km and consisted of three facilities: an administration and transmission building, a powerhouse, and a research laboratory. To transmit long-distance radio signals, the ETHW article says, the company installed a pair of 200-kilowatt Alexanderson alternators—50-tonne machines that generated alternating currents. The alternators, which created low-frequency radio waves capable of traveling thousands of kilometers, were shipped to Radio Central from General Electric’s factory in Schenectady, N.Y. They were installed in the administration and transmission building and were operational until 1948. Although RCA planned to install another 10 machines for US $1.3 million, they were never purchased because the technology became outdated when high-power vacuum tubes were developed. “It was a good thing we didn’t build the rest” of the Alexanderson alternators, RCA engineer Marshall Etter said in a 1992 article about Radio Central in The Antique Radio Gazette. “RCA would’ve been obsolete and in serious financial trouble.” A pond with eight hydrojets was built outside the administration building to cool the alternators. 
Eighty high-frequency transmitters with power ratings of up to 40 kW were later installed in the building. About 3 km away, in the powerhouse, backup transmitters were used to handle the increased messaging demand. To transmit signals from the Alexanderson alternators, RCA built a flat-top antenna system that stretched for 5 km. It consisted of six 125-meter steel towers that each had a 46-meter crossbeam on top. Mounted 4 meters apart on each crossbeam were 12 parallel wires, each 2,286 meters long. Their loading coils could be tuned to operate at different carrier frequencies and to match the antenna impedance to the transmitter, according to the ETHW article. The antenna system’s buried counterpoise wires, which reduced electrical resistance to ground, produced an effect equivalent to that of a copper plate 610 meters wide and 5 km long. When RCA donated the property to the state of New York in 1977, the towers were demolished. MAKING HISTORY RCA Radio Central in 1922 was the world’s largest and most powerful communications station. It was the heart of a wireless network that included California, Hawaii, Wales, east Asia, Europe, and South and Central America. Radio Central sent the first transoceanic facsimile. In November 1924, the facility sent a photograph of the U.S. president, Calvin Coolidge, to London. Two years later Radio Central began transmitting transoceanic photos for newspapers and weather maps to ships at sea, according to the ETHW entry. Today RCA exists only as a brand name. Sony Music Entertainment and Technicolor own its trademarks. The Milestone plaque is to be displayed in the vestibule of Rocky Point High School, which is located on the former Radio Central grounds. The plaque reads: On 5 November 1921, the world’s most powerful transoceanic radio facility at the time, RCA Radio Central, was inaugurated. 
Located at Rocky Point and Riverhead, New York, its Alexanderson 220 kW, 18.3 kHz transmitters and Beverage long-wire receiving antennas provided reliable worldwide radio communications. In succeeding years, RCA’s research laboratory also developed diversity radio reception, rhombic and folded-dipole antennas, the first transoceanic single side-band channels, and commercial facsimile service.

  • 6G Is Years Away, but the Power Struggles Have Already Begun
    by Michael Koziol on 29 November 2021 at 16:00

    When wireless researchers or telecom companies talk about future sixth-generation (6G) networks, they're talking mostly about their best guesses and wish lists. There are as yet no widely agreed-upon technical standards outlining 6G's frequencies, signal modulations, and waveforms. And yet the economic and political forces that will define 6G are already in play. And here's the biggest wrinkle: Because there are no major U.S. manufacturers of cellular infrastructure equipment, the United States may not have the superpowers it thinks it does in shaping the future course of wireless communications. While many U.S. tech giants will surely be involved in 6G standards development, none of those companies make the equipment that will comprise the network. Companies like Ericsson (Sweden), Nokia (Finland), Samsung (South Korea), and Huawei (China) build the radio units, baseband units, and other hardware and software that go into cell towers and the wired networks that connect them. As one example, equipment manufacturers (such as China's Huawei and ZTE) will probably push for standards that prioritize the distance a signal can travel, while minimizing the interference it experiences along the way. Meanwhile, device makers (like U.S. heavyweights Apple and Alphabet) will have a greater stake in standardizing signal modulations that drain their gadgets' batteries the least. How such squabbles might be resolved, of course, is still an open question. But now is arguably the best time to begin asking it. 6G is—and isn't yet—around the corner. When the Global Communications Conference (Globecom) begins in Madrid this December, attending researchers and telecom executives will find it features no fewer than five workshops devoted to 6G development. Compare that to the 2020 iteration of the IEEE Communication Society's annual conference, which—pandemic notwithstanding—included nothing 6G related beyond a 4-hour summit on the topic. 
And if you step back one year further to Globecom 2019, you'll find that 6G was limited to a single technical talk. Cellular standards are developed and overseen by a global cellular industry consortium, the 3rd Generation Partnership Project (3GPP). Past wireless generations coalesced around universally agreed-upon standards relatively smoothly. But early research into 6G is emerging in a more tense geopolitical environment, and the quibbles that arose during 5G's standardization could blossom into more serious disagreements this time around. At the moment, says Mehdi Bennis, a professor of wireless communications at the University of Oulu, in Finland, home of the 6G Flagship research initiative, the next generation of wireless standards is quite open-ended. "Nobody has a clear idea. We maybe have some pointers." To date, 6G has been discussed in terms of applications (including autonomous vehicles and holographic displays) and research interests—such as terahertz waves and spectrum sharing. So for the next few years, whenever a so-called "6G satellite" is launched, for example, take it with a grain of salt: It just means someone is testing technologies that may make their way into the 6G standards down the line. But such tests, although easily overhyped and used to set precedents and score points, are still important. The reason each generation of wireless—2G, 3G, 4G, and now 5G—has been so successful is that each has been defined by standards that have been universally implemented. In other words, a network operator in the United States like AT&T can buy equipment from Swedish manufacturer Ericsson to build its cellular network, and everything will function with phones made in China because they're drawing on the same set of agreed-upon standards. (Unfortunately, however, you'll still run into problems if you try to mix and match infrastructure equipment from different manufacturers.) 
5G and its predecessors have been successful because they've been universally implemented. 6G still has time to congeal—or not. In 2016, as the standards were being sorted out for 5G, a clash emerged in trying to decide what error-correcting technique would be used for wireless signals. Qualcomm, based in San Diego, and other companies pushed for low-density parity checks (LDPC), which had been first described decades earlier but had yet to materialize commercially. Huawei, backed by other Chinese companies, pushed for a new technique, called polar codes, in which it had invested a significant amount of time and energy. A deadlock at the 3GPP meeting that November resulted in a split standard: LDPC would be used for radio channels that send user data and polar codes for channels that coordinate those user-data channels. That Huawei managed to take polar codes from a relatively unknown mathematical theory and almost single-handedly develop them into a key component of 5G led to some proclamations that the company (and by extension, China) was winning the battle for 5G development. The implicit losers were Europe and the United States. The incident made at least one thing abundantly clear: There is a lot of money, prestige, and influence in the offing for a company that gets the tech it's been championing into the standards. In May 2019, the U.S. Bureau of Industry and Security added Huawei to its Entity List—which places requirements on, or prohibits, importing and exporting items. Sources that IEEE Spectrum spoke to noted how the move increased tensions in the wireless industry. "We are already seeing the balkanization of technology in many domains. If this trend continues, companies will have to create different products for different markets, leading to even further divergence," Zvika Krieger, the head of technology policy at the World Economic Forum, told MIT Technology Review at the time of the ban. 
The move curtailed the success Huawei originally saw from its 5G standards wins, with the rotating chairman, Eric Xu, recently saying that the company's cellphone revenue will drop by US $30 billion to $40 billion this year from a reported $136.7 billion in 2020. As fundamental research continues into what technologies and techniques will be implemented in 6G, it's too early to say what the next generation's version of polar codes will be, if any. But already, different priorities are emerging in the values that companies and governments in different parts of the world want to see enshrined in any standards to come. "There are some unique, or at least stronger, views on things like personal liberty, data security, and privacy in Europe, and if we wish our new technologies to support those views, it needs to be baked into the technology," said Colin Willcock, the chairman of the board for the Europe-based 6G Smart Networks and Services Industry Association, speaking at the Brooklyn 6G Summit in October. Bennis agrees: "In Europe, we're very keen on privacy, that's a big, big, I mean, big requirement." Bennis notes that privacy is being built into 5G "a posteriori" as researchers tack it onto the established standards. The European Union has previously passed laws protecting personal data and privacy such as the General Data Protection Regulation (GDPR). So how will concepts like privacy, security, or sustainability be embedded in 6G—if at all? For instance, one future version of 6G could include differential privacy, in which data-set patterns are shared without sharing individual data points. Or it could include federated learning, a machine learning technique that instead of being trained on a centralized data set uses one scattered across multiple locations—thereby effectively anonymizing information that malicious actors in a network might otherwise put to nefarious purposes. 
These techniques are already being implemented in 5G networks by researchers, but integrating them into 6G standards would give them more weight. The Washington, D.C.–based Alliance for Telecommunications Industry Solutions launched the Next G Alliance in October 2020 to strengthen U.S. technological leadership in 6G over the course of the next decade. Mike Nawrocki, the alliance's managing director, says the alliance is taking a "holistic" approach to 6G's development. "We're really trying to look at it from the perspective of what are some of the big societal drivers that we would envision for the end of the decade," Nawrocki says, citing as one example the need to connect industries previously unconcerned with the wireless industry such as health care and agriculture. If different regions—the United States, Europe, China, Japan, South Korea, and so on—find themselves at loggerheads about how to define certain standards or support incompatible policies about the implementations or applications of 6G networks, global standards could ultimately, in a worst-case scenario, disintegrate. Different countries could decide it's easier to go it alone and develop their own 6G technologies without global cooperation. This would result in balkanized wireless technologies around the world. Smartphone users in China might find their phones unable to connect with any other wireless network outside their country's borders. Or, for instance, AT&T might, in such a scenario, no longer buy equipment from Nokia because it's incompatible with AT&T's network operations. Although that's a dire outcome, the industry consensus is that it's not likely yet—but certainly more plausible than for any other wireless generation. This article appears in the December 2021 print issue as "Geopolitics Is Already Shaping 6G."
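The differential-privacy idea mentioned in the article (releasing an aggregate statistic with enough calibrated noise that no single record can be inferred) can be illustrated in a few lines. What follows is a generic textbook sketch of the Laplace mechanism in Python, not anything drawn from a 3GPP document; the function names and parameter values are illustrative.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-transform sample from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, lo, hi, epsilon):
    """Release the mean of `values` with epsilon-differential privacy.

    Clipping each value to [lo, hi] bounds how much any one record can
    move the mean: by at most (hi - lo) / n. That bound (the sensitivity)
    sets the scale of the Laplace noise added before release.
    """
    n = len(values)
    clipped = [min(max(v, lo), hi) for v in values]
    sensitivity = (hi - lo) / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon)
```

A small epsilon means more noise and stronger privacy; a large epsilon returns something close to the true mean. The trade-off a standards body would face is exactly this dial, applied to network measurements rather than a toy list of numbers.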

  • Can This DIY Rocket Program Send an Astronaut to Space?
    by Mads Stenfatt on 28 November 2021 at 16:00

    It was one of the prettiest sights I have ever seen: our homemade rocket floating down from the sky, slowed by a white-and-orange parachute that I had worked on during many nights at the dining room table. The 6.7-meter-tall Nexø II rocket was powered by a bipropellant engine designed and constructed by the Copenhagen Suborbitals team. The engine mixed ethanol and liquid oxygen together to produce a thrust of 5 kilonewtons, and the rocket soared to a height of 6,500 meters. Even more important, it came back down in one piece. That successful mission in August 2018 was a huge step toward our goal of sending an amateur astronaut to the edge of space aboard one of our DIY rockets. We're now building the Spica rocket to fulfill that mission, and we hope to launch a crewed rocket about 10 years from now. Copenhagen Suborbitals is the world's only crowdsourced crewed spaceflight program, funded to the tune of almost US $100,000 per year by hundreds of generous donors around the world. Our project is staffed by a motley crew of volunteers who have a wide variety of day jobs. We have plenty of engineers, as well as people like me, a pricing manager with a skydiving hobby. I'm also one of three candidates for the astronaut position. We're in a new era of spaceflight: The national space agencies are no longer the only game in town, and space is becoming more accessible. Rockets built by commercial players like Blue Origin are now bringing private citizens into space. That said, Blue Origin, SpaceX, and Virgin Galactic are all backed by billionaires with enormous resources, and they have all expressed intentions to sell flights for hundreds of thousands to millions of dollars. Copenhagen Suborbitals has a very different vision. We believe that spaceflight should be available to anyone who's willing to put in the time and effort. Copenhagen Suborbitals was founded in 2008 by a self-taught engineer and a space architect who had previously worked for NASA. 
From the beginning, the mission was clear: crewed spaceflight. Both founders left the organization in 2014, but by then the project had about 50 volunteers and plenty of momentum. The group took as its founding principle that the challenges involved in building a crewed spacecraft on the cheap are all engineering problems that can be solved, one at a time, by a diligent team of smart and dedicated people. When people ask me why we're doing this, I sometimes answer, "Because we can." Volunteers use a tank of argon gas [left] to fill a tube within which engine elements are fused together. The team recently manufactured a fuel tank for the Spica rocket [right] in their workshop. Our goal is to reach the Kármán line, which defines the boundary between Earth's atmosphere and outer space, 100 kilometers above sea level. The astronaut who reaches that altitude will have several minutes of silence and weightlessness after the engines cut off and will enjoy a breathtaking view. But it won't be an easy ride. During the descent, the capsule will experience external temperatures of 400 °C and g-forces of 3.5 as it hurtles through the air at speeds of up to 3,500 kilometers per hour. I joined the group in 2011, after the organization had already moved from a maker space inside a decommissioned ferry to a hangar near the Copenhagen waterfront. Earlier that year, I had watched Copenhagen Suborbitals' first launch, in which the HEAT-1X rocket took off from a mobile launch platform in the Baltic Sea—but unfortunately crash-landed in the ocean when most of its parachutes failed to deploy. I brought to the organization some basic knowledge of sports parachutes gained during my years of skydiving, which I hoped would translate into helpful skills. The team's next milestone came in 2013, when we successfully launched the Sapphire rocket, our first rocket to include guidance and navigation systems. 
Its navigation computer used a 3-axis accelerometer and a 3-axis gyroscope to keep track of its location, and its thrust-control system kept the rocket on the correct trajectory by moving four servo-mounted copper jet vanes that were inserted into the exhaust assembly. We believe that spaceflight should be available to anyone who's willing to put in the time and effort. The HEAT-1X and the Sapphire rockets were fueled with a combination of solid polyurethane and liquid oxygen. We were keen to develop a bipropellant rocket engine that mixed liquid ethanol and liquid oxygen, because such liquid-propellant engines are both efficient and powerful. The HEAT-2X rocket, scheduled to launch in late 2014, was meant to demonstrate that technology. Unfortunately, its engine went up in flames, literally, in a static test firing some weeks before the scheduled launch. That test was supposed to be a controlled 90-second burn; instead, because of a welding error, much of the ethanol gushed into the combustion chamber in just a few seconds, resulting in a massive conflagration. I was standing a few hundred meters away, and even from that distance I felt the heat on my face. The HEAT-2X rocket's engine was rendered inoperable, and the mission was canceled. While it was a major disappointment, we learned some valuable lessons. Until then, we'd been basing our designs on our existing capabilities—the tools in our workshop and the people on the project. The failure forced us to take a step back and consider what new technologies and skills we would need to master to reach our end goal. That rethinking led us to design the relatively small Nexø I and Nexø II rockets to demonstrate key technologies such as the parachute system, the bipropellant engine, and the pressure regulation assembly for the tanks. 
For the Nexø II launch in August 2018, our launch site was 30 km east of Bornholm, Denmark's easternmost island, in a part of the Baltic Sea used by the Danish navy for military exercises. We left Bornholm's Nexø harbor at 1 a.m. to reach the designated patch of ocean in time for a 9 a.m. launch, the time approved by Swedish air traffic control. (While our boats were in international waters, Sweden has oversight of the airspace above that part of the Baltic Sea.) Many of our crew members had spent the entire previous day testing the rocket's various systems and got no sleep before the launch. We were running on coffee. When the Nexø II blasted off, separating neatly from the launch tower, we all cheered. The rocket continued on its trajectory, jettisoning its nose cone when it reached its apogee of 6,500 meters, and sending telemetry data back to our mission control ship all the while. As it began to descend, it first deployed its ballute, a balloon-like parachute used to stabilize spacecraft at high altitudes, and then deployed its main parachute, which brought it gently down to the ocean waves. In 2018, the Nexø II rocket launched successfully [left] and returned safely to the Baltic Sea [right]. The launch brought us one step closer to mastering the logistics of launching and landing at sea. For this launch, we were also testing our ability to predict the rocket's path. I created a model that estimated a splashdown 4.2 km east of the launch platform; it actually landed 4.0 km to the east. This controlled water landing—our first under a fully inflated parachute—was an important proof of concept for us, since a soft landing is an absolute imperative for any crewed mission. This past April, the team tested its new fuel injectors in a static engine test. Carsten Olsen The Nexø II's engine, which we called the BPM5, was one of the few components we hadn't machined entirely in our workshop; a Danish company made the most complicated engine parts. 
But when those parts arrived in our workshop shortly before the launch date, we realized that the exhaust nozzle was a little bit misshapen. We didn't have time to order a new part, so one of our volunteers, Jacob Larsen, used a sledgehammer to pound it into shape. The engine didn't look pretty—we nicknamed it the Franken-Engine—but it worked. Since the Nexø II's flight, we've test-fired that engine more than 30 times, sometimes pushing it beyond its design limits, but we haven't killed it yet. The Spica astronaut's 15-minute ride to the stars will be the product of more than two decades of work. That mission also demonstrated our new dynamic pressure regulation (DPR) system, which helped us control the flow of fuel into the combustion chamber. The Nexø I had used a simpler system called pressure blowdown, in which the fuel tanks were one-third filled with pressurized gas to drive the liquid fuel into the chamber. With DPR, the tanks are filled to capacity with fuel and linked by a set of control valves to a separate tank of helium gas under high pressure. That setup lets us regulate the amount of helium gas flowing into the tanks to push fuel into the combustion chamber, enabling us to program in different amounts of thrust at different points during the rocket's flight. The 2018 Nexø II mission proved that our design and technology were fundamentally sound. It was time to start working on the human-rated Spica rocket. Copenhagen Suborbitals hopes to send an astronaut aloft in its Spica rocket in about a decade. Caspar Stanley With its crew capsule, the Spica rocket will measure 13 meters high and will have a gross liftoff weight of 4,000 kilograms, of which 2,600 kg will be fuel. It will be, by a significant margin, the largest rocket ever built by amateurs. The Spica rocket will use the BPM100 engine, which the team is currently manufacturing. Thomas Pedersen Its engine, the 100-kN BPM100, uses technologies we mastered for the BPM5, with a few improvements. 
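At its core, the dynamic pressure regulation described above is a feedback loop: a valve meters helium into the propellant tank so that the measured tank pressure tracks a commanded profile, which in turn sets the thrust. The toy simulation below is only an illustration of that idea, with made-up constants, arbitrary pressure units, and a bare proportional controller; it is not Copenhagen Suborbitals' flight software.

```python
def simulate_dpr(profile, kp=0.8, dt=0.1, tank_gain=1.0, bleed=0.02):
    """Toy DPR loop: a proportional valve command drives tank pressure
    toward each commanded setpoint in `profile` (arbitrary units)."""
    pressure = profile[0]                        # start at the initial setpoint
    history = []
    for setpoint in profile:
        error = setpoint - pressure
        valve = max(0.0, min(1.0, kp * error))   # valve opening, clamped to 0..1
        pressure += tank_gain * valve * dt       # helium inflow raises tank pressure
        pressure -= bleed * dt                   # pressure lost as propellant leaves
        history.append(pressure)
    return history
```

Commanding a step from one setpoint to a higher one makes the valve open fully until the tank catches up, which is the mechanism that lets the team program different thrust levels into different phases of the flight.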
Like the prior design, it uses regenerative cooling in which some of the propellant passes through channels around the combustion chamber to limit the engine's temperature. To push fuel into the chamber, it uses a combination of the simple pressure blowdown method in the first phase of flight and the DPR system, which gives us finer control over the rocket's thrust. The engine parts will be stainless steel, and we hope to make most of them ourselves out of rolled sheet metal. The trickiest part, the double-curved "throat" section that connects the combustion chamber to the exhaust nozzle, requires computer-controlled machining equipment that we don't have. Luckily, we have good industry contacts who can help out. One major change was the switch from the Nexø II's showerhead-style fuel injector to a coaxial-swirl fuel injector. The showerhead injector had about 200 very small fuel channels. It was tough to manufacture, because if something went wrong when we were making one of those channels—say, the drill got stuck—we had to throw the whole thing away. In a coaxial-swirl injector, the liquid fuels come into the chamber as two rotating liquid sheets, and as the sheets collide, they're atomized to create a propellant that combusts. Our swirl injector uses about 150 swirler elements, which are assembled into one structure. This modular design should be easier to manufacture and test for quality assurance. The BPM100 engine will replace an old showerhead-style fuel injector [right] with a coaxial-swirl injector [left], which will be easier to manufacture. Thomas Pedersen In April of this year, we ran static tests of several types of injectors. We first did a trial with a well-understood showerhead injector to establish a baseline, then tested brass swirl injectors made by traditional machine milling as well as steel swirl injectors made by 3D printing. 
We were satisfied overall with the performance of both swirl injectors, and we're still analyzing the data to determine which functioned better. However, we did see some combustion instability—namely, some oscillation in the flames between the injector and the engine's throat, a potentially dangerous phenomenon. We have a good idea of the cause of these oscillations, and we're confident that a few design tweaks can solve the problem. Volunteer Jacob Larsen holds a brass fuel injector that performed well in a 2021 engine test. Carsten Olsen We'll soon commence building a full-scale BPM100 engine, which will ultimately incorporate a new guidance system for the rocket. Our prior rockets, within their engines' exhaust nozzles, had metal vanes that we would move to change the angle of thrust. But those vanes generated drag within the exhaust stream and reduced effective thrust by about 10 percent. The new design has gimbals that swivel the entire engine back and forth to control the thrust vector. As further support for our belief that tough engineering problems can be solved by smart and dedicated people, our gimbal system was designed and tested by a 21-year-old undergraduate student from the Netherlands named Jop Nijenhuis, who used the gimbal design as his thesis project (for which he got the highest possible grade). We're using the same guidance, navigation, and control (GNC) computers that we used in the Nexø rockets. One new challenge is the crew capsule; once the capsule separates from the rocket, we'll have to control each part on its own to bring them both back down to Earth in the desired orientation. When separation occurs, the GNC computers for the two components will need to understand that the parameters for optimal flight have changed. But from a software point of view, that's a minor problem compared to those we've solved already. 
Bianca Diana works on a drone she's using to test a new guidance system for the Spica rocket. Carsten Olsen My specialty is parachute design. I've worked on the ballute, which will inflate at an altitude of 70 km to slow the crewed capsule during its high-speed initial descent, and the main parachutes, which will inflate when the capsule is 4 km above the ocean. We've tested both types by having skydivers jump out of planes with the parachutes, most recently in a 2019 test of the ballute. The pandemic forced us to pause our parachute testing, but we should resume soon. For the parachute that will deploy from the Spica's booster rocket, the team tested a small prototype of a ribbon parachute. Mads Stenfatt For the drogue parachute that will deploy from the booster rocket, my first prototype was based on a design called Supersonic X, which is a parachute that looks somewhat like a flying onion and is very easy to make. However, I reluctantly switched to ribbon parachutes, which have been more thoroughly tested in high-stress situations and found to be more stable and robust. I say "reluctantly" because I knew how much work it would be to assemble such a device. I first made a 1.24-meter-diameter parachute that had 27 ribbons going across 12 panels, each attached in three places. So on that small prototype, I had to sew 972 connections. A full-scale version will have 7,920 connection points. I'm trying to keep an open mind about this challenge, but I also wouldn't object if further testing shows the Supersonic X design to be sufficient for our purposes. We've tested two crew capsules in past missions: the Tycho Brahe in 2011 and the Tycho Deep Space in 2012. The next-generation Spica crew capsule won't be spacious, but it will be big enough to hold a single astronaut, who will remain seated for the 15 minutes of flight (and for two hours of preflight checks). 
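The connection counts quoted for the ribbon parachute follow from a single multiplication: every ribbon crosses every panel and is attached in three places. (The ribbon and panel counts for the full-scale chute aren't given in the article, so only the prototype figure can be rederived here.)

```python
def ribbon_connections(ribbons, panels, attachments_per_crossing=3):
    # Each ribbon is sewn at several points on every panel it crosses.
    return ribbons * panels * attachments_per_crossing

# The 1.24-meter prototype: 27 ribbons across 12 panels, 3 attachments each.
print(ribbon_connections(27, 12))  # prints 972, matching the article
```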
The first spacecraft we're building is a heavy steel "boilerplate" capsule, a basic prototype that we're using to arrive at a practical layout and design. We'll also use this model to test hatch design, overall resistance to pressure and vacuum, and the aerodynamics and hydrodynamics of the shape, as we want the capsule to splash down into the sea with minimal shock to the astronaut inside. Once we're happy with the boilerplate design, we'll make the lightweight flight version. Copenhagen Suborbitals currently has three astronaut candidates for its first flight: from left, Mads Stenfatt, Anna Olsen, and Carsten Olsen. Mads Stenfatt Three members of the Copenhagen Suborbitals team are currently candidates to be the astronaut in our first crewed mission—me, Carsten Olsen, and his daughter, Anna Olsen. We all understand and accept the risks involved in flying into space on a homemade rocket. In our day-to-day operations, we astronaut candidates don't receive any special treatment or training. Our one extra responsibility thus far has been sitting in the crew capsule's seat to check its dimensions. Since our first crewed flight is still a decade away, the candidate list may well change. As for me, I think there's considerable glory in just being part of the mission and helping to build the rocket that will bring the first amateur astronaut into space. Whether or not I end up being that astronaut, I'll forever be proud of our achievements. The astronaut will go to space inside a small crew capsule on the Spica rocket. The astronaut will remain seated for the 15-minute flight (and for the 2-hour flight check before). Carsten Brandt People may wonder how we get by on a shoestring budget of about $100,000 a year—particularly when they learn that half of our income goes to paying rent on our workshop. 
We keep costs down by buying standard off-the-shelf parts as much as possible, and when we need custom designs, we're lucky to work with companies that give us generous discounts to support our project. We launch from international waters, so we don't have to pay a launch facility. When we travel to Bornholm for our launches, each volunteer pays his or her own way, and we stay in a sports club near the harbor, sleeping on mats on the floor and showering in the changing rooms. I sometimes joke that our budget is about one-tenth what NASA spends on coffee. Yet it may well be enough to do the job. We had intended to launch Spica for the first time in the summer of 2021, but our schedule was delayed by the COVID-19 pandemic, which closed our workshop for many months. Now we're hoping for a test launch in the summer of 2022, when conditions on the Baltic Sea will be relatively tame. For this preliminary test of Spica, we'll fill the fuel tanks only partway and will aim to send the rocket to a height of around 30 to 50 km. If that flight is a success, in the next test, Spica will carry more fuel and soar higher. If the 2022 flight fails, we'll figure out what went wrong, fix the problems, and try again. It's remarkable to think that the Spica astronaut's eventual 15-minute ride to the stars will be the product of more than two decades of work. But we know our supporters are counting down until the historic day when an amateur astronaut will climb aboard a homemade rocket and wave goodbye to Earth, ready to take a giant leap for DIY-kind. A Note on Safety One reason that Copenhagen Suborbitals has advanced quite slowly toward its ultimate goal of crewed spaceflight is our focus on safety. We test our components extensively; for example, we tested the engine that powered the 2016 Nexø I rocket about 30 times before the launch. When we plan and execute launches, our bible is a safety manual from the Wallops Flight Facility, part of NASA's Goddard Space Flight Center. 
Before each launch, we run simulations of the flight profile to ensure there's no risk of harm to our crew, our boats, and any other people or property. We launch from the sea to further reduce the chance that our rockets will damage anyone or anything. We recognize that our human-rated spacecraft, the Spica rocket and crew capsule, must meet a higher bar for safety than anything we've built before. But we must be honest about our situation: If we set the bar too high, we'll never finish the project. We can't afford to test our systems to the extent that commercial companies do (that's why we'll never sell rides on our rockets). Each astronaut candidate understands these risks. Speaking as one of those candidates, I'd feel confident enough to climb aboard if each of my friends who worked on the rocket can look me in the eyes and say, "Yes, we're ready." —M.S. This article appears in the December 2021 print issue as "The First Crowdfunded Astronaut." A Skydiver Who Sews HENRIK JORDAHN Mads Stenfatt first contacted Copenhagen Suborbitals with some constructive criticism. In 2011, while looking at photos of the DIY rocketeers' latest rocket launch, he had noticed a camera mounted close to the parachute apparatus. Stenfatt sent an email detailing his concern—namely, that a parachute's lines could easily get tangled around the camera. "The answer I got was essentially, 'If you can do better, come join us and do it yourself,' " he remembers. That's how he became a volunteer with the world's only crowdfunded crewed spaceflight program. As an amateur skydiver, Stenfatt knew the basic mechanics of parachute packing and deployment. He started helping Copenhagen Suborbitals design and pack parachutes, and a few years later he took over the job of sewing the chutes as well. He had never used a sewing machine before, but he learned quickly over nights and weekends at his dining room table. 
One of his favorite projects was the design of a high-altitude parachute for the Nexø II rocket, launched in 2018. While working on a prototype and puzzling over the design of the air intakes, he found himself on a Danish sewing website looking at brassiere components. He decided to use bra underwires to stiffen the air intakes and keep them open, which worked quite well. Though he eventually went in a different design direction, the episode is a classic example of the Copenhagen Suborbitals ethos: Gather inspiration and resources from wherever you find them to get the job done. Today, Stenfatt serves as lead parachute designer, frequent spokesperson, and astronaut candidate. He also continues to skydive in his spare time, with hundreds of jumps to his name. Having ample experience zooming down through the sky, he's intently curious about what it would feel like to go the other direction.

  • What the Well-Dressed Spacecraft Will Be Wearing
    by Juliana Cherston on 27. Novembra 2021. at 16:00

This coming February, the Cygnus NG-17 spacecraft will launch from NASA Wallops, in Virginia, on a routine resupply mission to the International Space Station. Amid the many tonnes of standard crew supplies, spacewalk equipment, computer hardware, and research experiments will be one unusual package: a pair of electronic textile swatches embedded with impact and vibration sensors. Soon after the spacecraft's arrival at the ISS, a robotic arm will mount the samples onto the exterior of Alpha Space's Materials ISS Experiment (MISSE) facility, and control-room operators back on Earth will feed power to the samples. For the next six months, our team will conduct the first operational test of sensor-laden electronic fabrics in space, collecting data in real time as the sensors endure the harsh weather of low Earth orbit. We also hope that microscopic dust or debris, traveling at least an order of magnitude faster than sound, will strike the fabric and trigger the sensors. Our eventual aim is to use such smart electronic textiles to study cosmic dust, some of which has interplanetary or even interstellar origins. Imagine if the protective fabric covering a spacecraft could double as an astrophysics experiment, but without adding excessive mass, volume, or power requirements. What if this smart skin could also measure the cumulative damage caused by orbital space debris and micrometeoroids too small to be tracked by radar? Could sensored textiles in pressurized spacesuits give astronauts a sense of touch, as if the fabric were their own skin? In each case, electronic fabrics sensitive to vibrations and charge could serve as a foundational technology. Already, engineered fabrics serve crucial functions here on Earth. Geotextiles made of synthetic polymers are buried deep underground to strengthen land embankments. Surgical meshes reinforce tissue and bone during invasive medical procedures.
In space, the outer walls of the ISS are wrapped in a protective engineered textile that gives the station its white color. Called Beta cloth, the woven fabric covers the station's metal shell and shields the spacecraft from overheating and erosion. Beta cloth can also be found on the exterior of Apollo-era spacesuits and Bigelow Aerospace's next-generation inflatable habitats. Until it is possible to substantially alter the human body itself, resilient textiles like this will continue to serve as a crucial boundary—a second skin—protecting human explorers and spacecraft from the extremes of space. Now it's time to bring some smarts to this skin. Juliana Cherston prepares a smart-fabric system in the clean room at Alpha Space in Houston [top]. Electronics in the silver flight hardware box [bottom] stream data to the computer in the blue box. The system, set for launch in February, will be mounted on the Materials ISS Experiment facility.Allison Goode/Aegis Aerospace Our lab, the Responsive Environments Group at MIT, has been working for well over a decade on embedding distributed sensor networks into flexible substrates. In 2018, we were knee-deep in developing a far-out concept to grapple an asteroid with an electronic web, which would allow a network of hundreds or thousands of tiny robots to crawl across the surface as they characterized the asteroid's materials. The technology was curious to contemplate but unlikely to be deployed anytime soon. During a visit to our lab, Hajime Yano, a planetary scientist at the Japan Aerospace Exploration Agency's Institute of Space and Astronautical Science, suggested a nearer-term possibility: to turn the Beta cloth blanket used on long-duration spacecraft into a science experiment. Thus began a collaboration that has so far resulted in multiple rounds of prototyping and ground testing and two experiments in space. One of the tests is the upcoming launch aboard the Cygnus NG-17, funded by the ISS National Laboratory. 
As the ISS orbits Earth, and the local space environment changes, we'll be triggering our sensors with known excitations to measure how their sensitivity varies over time. Concurrently, we'll take impedance measurements, which will let us peek into the internal electrical properties of the fibers. Any changes to the protective capabilities of the Beta fabric will be picked up using temperature sensors. If the system functions as designed, we may even detect up to 20 micrometeoroid impacts across the fabric's 10-by-10-centimeter area. A triggering system will flag any interesting data to be streamed to Earth in real time. A second in-space experiment is already underway. For more than a year, a wider range of our smart-fabric swatches has been quietly tucked away on a different section of the ISS's walls, on Space BD's Exposed Experiment Handrail Attachment Mechanism (ExHAM) facility. In this experiment, funded by the MIT Media Lab Space Exploration Initiative, the samples aren't being powered. Instead, we're monitoring their exposure to the space environment, which can be tough on materials. They endure repeated cycles of extreme heat and cold, radiation, and material-eroding atomic oxygen. Through real-time videography sessions we've been conducting with the Japan Aerospace Exploration Agency (JAXA), we've already seen signs of some anticipated discoloration of our samples. Once the samples return to Earth in late January via the SpaceX CRS-24 rocket, we'll conduct a more thorough evaluation of the fabrics' sensor performance. A video inspection shows sensored fabrics mounted on the Exposed Experiment Handrail Attachment Mechanism (ExHAM) facility on the International Space Station. The experiment, which began in October 2020, is studying the resiliency of different types of fabric sensors when they're exposed to the harsh environment of low Earth orbit. 
JAXA/Space BD By demonstrating how to sleekly incorporate sensors into mission-critical subsystems, we hope to encourage the widespread adoption of electronic textiles as scientific instrumentation. Electronic textiles got an early and auspicious start in space. In the 1960s, the software for the Apollo guidance computer was stored in a woven substrate called core rope memory. Wires were fed through conductive loops to indicate 1s and around loops to indicate 0s, achieving a memory density of 72 kilobytes per cubic foot (or about 2,500 kilobytes per cubic meter). Around the same time, a company called Woven Electronics (now part of Collins Aerospace) began developing fabric circuit board prototypes that were considered well ahead of their time. For a fleeting moment in computing, woven fabric circuits and core rope memory were competitive with silicon semiconductor technology. Electronic fabrics then fell into a long hiatus, until interest in wearable technology in the 1990s revived the idea. Our group pioneered some early prototypes, working, for instance, with Levi's in the late '90s on a jean jacket with an embroidered MIDI keyboard. Since then, researchers and companies have created a plethora of sensing technologies in fabric, especially for health-related wearables, like flexible sensors worn on the skin that monitor your well-being through your sweat, heart rate, and body temperature. More recently, sophisticated fiber sensors have been pushing the performance and capabilities of electronic textiles even further. Our collaborators in the Fibers@MIT group, for example, use a manufacturing technique called thermal drawing, in which a centimeter-thick sandwich of materials is heated and stretched to submillimeter thickness, like pulling a multicolored taffy. Incredibly, the internal structure of the resulting fiber remains highly precise, yielding functional devices such as sensors for vibration, light, and temperature that can be woven directly into fabrics. 
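The core rope memory density quoted above is easy to sanity-check. A minimal sketch converting 72 kilobytes per cubic foot into kilobytes per cubic meter:

```python
# Sanity check of the core rope memory density quoted above:
# 72 kilobytes per cubic foot, expressed in kilobytes per cubic meter.
CUBIC_FEET_PER_CUBIC_METER = 1 / 0.3048**3  # about 35.31 cubic feet per cubic meter

kb_per_cubic_foot = 72
kb_per_cubic_meter = kb_per_cubic_foot * CUBIC_FEET_PER_CUBIC_METER
print(round(kb_per_cubic_meter))  # ~2543, i.e. "about 2,500 kilobytes per cubic meter"
```

The conversion lands at roughly 2,543 KB per cubic meter, consistent with the figure in the text.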
To make a piezoelectric fiber sensor, researchers at the Fibers@MIT group sandwich materials together and then heat and stretch them like taffy. The faint copper wires are used to make electrical contact with the materials inside the fiber. The fibers can then be woven into Beta cloth.Bob O'Connor But this exciting progress hasn't yet made its way to space textiles. Today's spacesuits aren't too different from the one that Alan Shepard wore inside Freedom 7 in 1961. Recent suit designs have instead focused on improving the astronaut's mobility and temperature regulation. They might have touch-screen-compatible fingertips, but that's about as sophisticated as the functionality gets. Meanwhile, Beta cloth has been used on space habitats in more or less its present form for more than a half century. A smattering of fabric antennas and fiber-optic strain sensors have been developed for rigid composites. But little has been done to add electronic sensory function to the textiles we use in space. To jump-start this research, our group has tackled three areas: We've built fabric sensors, we've worked with specialized facilities to obtain a baseline of the materials' sensitivity to impact, and we've designed instrumentation to test these fabrics in space. We started by upgrading Beta cloth, which is a Teflon-impregnated fabric made of flexible fiberglass filaments that are so densely woven that the material feels almost like a thick sheet of paper. To this protective layer, we wanted to add the ability to detect the tiny submillimeter or micrometer-scale impacts from cosmic dust. These microparticles move fast, at speeds of up to 50 kilometers per second, with an average speed of around 10 km/s. A 10-micrometer iron-dominant particle traveling at that speed contains about 75 microjoules of kinetic energy. It isn't much energy, but it can still carry quite a punch when concentrated to a small impact area. 
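The impact-energy figure above can be reproduced with a back-of-envelope calculation. The density used below is an assumption on our part (a typical cosmic-dust, chondritic value of about 2.9 g/cm³); a grain of solid iron would be roughly 2.5 times denser and carry proportionally more kinetic energy:

```python
# Back-of-envelope check of the impact energy quoted in the text: a
# ~10-micrometer dust grain at ~10 km/s. The density is an assumption, a
# typical chondritic value of ~2.9 g/cm^3, not solid iron.
import math

diameter_m = 10e-6      # assumed particle diameter: 10 micrometers
density_kg_m3 = 2900    # assumed grain density (chondritic)
speed_m_s = 10_000      # ~10 km/s average impact speed from the text

mass_kg = density_kg_m3 * (math.pi / 6) * diameter_m**3  # sphere: (pi/6) d^3
energy_j = 0.5 * mass_kg * speed_m_s**2

print(f"KE = {energy_j * 1e6:.0f} microjoules")  # ~76, near the ~75 quoted
```

The result is sensitive to the assumed diameter and density, so treat it as an order-of-magnitude estimate only.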
Studying the kinematics and spatial distributions of such impacts can give scientists insight into the composition and origins of cosmic dust. What's more, these impacts can cause significant damage to spacecraft, so we'd like to measure how frequent and energetic they are. A replica of the smart-fabric payload that's launching in February shows the electronics and internal layers.Bob O'Connor What kind of fabric sensors would be sensitive enough to pick up the signals from these minuscule impacts? Early on, we settled on using piezoelectric fibers. Piezoelectric materials produce surface charge when subject to mechanical deformation. When a piezoelectric layer is sandwiched between two electrodes, it forms a sensor that can translate mechanical vibration into current. Piezoelectric impact sensors have been used on spacecraft before, but never as part of a fabric or as dispersed fibers. One of the chief requirements for piezoelectrics is that the electric dipoles inside the material must all be lined up in order for the charge to accumulate. To permanently align the dipoles—a process called poling—we have to apply a substantial electric field of about 100 kilovolts for every millimeter of thickness. Early on, we experimented with weaving bare polyvinylidene difluoride yarn into Beta cloth. This single-material yarn has the advantage of being as fine and flexible as the fibers in clothing and is also radiation- and abrasion-resistant. Plus, the fiber-drawing process creates a crystalline phase structure that encourages poling. Applying a hefty voltage to the fabric, though, caused any air trapped in the porous material to become electrically conductive, inducing miniature lightning bolts across the material and spoiling the poling process. We tried a slew of tricks to minimize the arcing, and we tested piezoelectric ink coatings applied to the fabric. 
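The ~100 kilovolts-per-millimeter poling field mentioned above also explains why thin fibers are easier to pole than a full fabric layer: the required absolute voltage scales with thickness. The two thicknesses below are illustrative assumptions, not measurements from the project:

```python
# Scale of the poling voltages implied by the ~100 kV-per-millimeter field
# mentioned in the text. The two thicknesses are illustrative assumptions.
FIELD_KV_PER_MM = 100  # poling field from the text

thicknesses_mm = {
    "0.5 mm fabric cross-section": 0.5,
    "50 micrometer fiber sensor": 0.05,
}
poling_kv = {label: FIELD_KV_PER_MM * t for label, t in thicknesses_mm.items()}

for label, kv in poling_kv.items():
    print(f"{label}: ~{kv:g} kV to pole")  # 50 kV vs. 5 kV
```

Dropping the poling distance from a fabric's thickness to a fiber's diameter cuts the needed voltage by an order of magnitude, which is part of why multimaterial fibers proved more practical.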
Imagine if the protective fabric covering a spacecraft could double as an astrophysics experiment, but without adding excessive mass, volume, or power requirements. Ultimately, though, we determined that multimaterial fiber sensors were preferable to single-material yarns, because the dipole alignment needs to occur only across the very tiny and precise distances within each fiber sensor, rather than across a fabric's thickness or across a fabric coating's uneven surface. We chose two different fiber sensors. One of the fibers is a piezoceramic nanocomposite fiber designed by Fibers@MIT, and the other is a polymer we harvested from commercial piezoelectric cabling, then modified to be suitable for fabric integration. We coated these fiber sensors in an elastomeric conductive ink, as well as a white epoxy that keeps the fibers cool and resists oxidation. To produce our fabric, we worked with space-textile manufacturer JPS Composite Materials, in Anderson, S.C. The company helped insert our two types of piezoelectric fibers at intervals across the fabric and ensured that our version of Beta cloth still adhered to NASA specifications. We have also worked with the Rhode Island School of Design on fabric manufacturing. The green laser in the Laser-Induced Particle Impact Test facility at MIT's Institute for Soldier Nanotechnologies accelerates particles to supersonic speeds.Bob O'Connor To test the sensitivity of our fabric, we have been using the Laser-Induced Particle Impact Test (LIPIT) platform designed by Keith Nelson's group at MIT's Institute for Soldier Nanotechnologies. This benchtop apparatus is designed for investigating how materials respond to microparticle impacts, such as in needle-free drug delivery and cold-sprayed industrial coatings. In our tests, we used the platform's high-speed particles to simulate space dust. 
In a typical experiment, we spread steel particles ranging from a few micrometers to tens of micrometers onto gold film atop a glass substrate, which we call a launchpad. For each shot, a laser pulse vaporizes the gold film, exerting an impulsive force on the particles and accelerating them to speeds of many hundreds of meters per second. A high-speed camera captures the impact of the steel particles on our target fabric swatch, taking a frame every few nanoseconds, equivalent to hundreds of millions of frames per second. So far, we've been able to detect electrical signals not only when the particles struck a sensor's surface but also when particles struck 1 or 2 cm away from the sensor. In some camera footage, it's even possible to see the acoustic wave created by the indirect impact propagating along the fabric's surface and eventually reaching the piezoelectric fiber. This promising data suggests that we can space out our sensors across the fabric and still be able to detect the impacts. Juliana Cherston and Joe Paradiso of MIT's Responsive Environments Group and Wei Yan of the Fibers@MIT group are part of the team behind the smart-textile experiment launching in February.Bob O'Connor Now we're working to nail down just how sensitive the fabric is—that is, what ranges of particle mass and velocity it can register. We're soon scheduled to test our fabric at a Van de Graaff accelerator, which can propel particles of a few micrometers in diameter to speeds of tens of kilometers per second, which is more in line with interstellar dust velocities. Beyond piezoelectrics, we're also interested in detecting the plumes of electric charge that form when a particle strikes the fabric at high speed. Those plumes contain clues about the impactor's constituent elements. One of our samples on the ISS is an electrically conductive synthetic fur made of silvered Vectran fibers.
More typically used to reinforce electrical cables, badminton string, and bicycle tires, Vectran is also a key component in inflatable spacecraft. In our case, we manufactured it like a carpet or a fur coat. We believe this design may be well suited to catching the plumes of charge ejected from impact, which could make for an even more sensitive detector. Meanwhile, there's growing interest in porting sensored textiles to spacesuits. A few members in our group have worked on a preliminary concept that uses fabrics containing vibration, pressure, proximity, and touch sensors to discriminate between a glove, metallic equipment, and rocky terrain—just the sorts of surfaces that astronauts wearing pressurized suits would encounter. This sensor data is then mapped to haptic actuators on the astronauts' own skin, allowing wearers to vividly sense their surroundings right through their suits. A close-up of the circuit board that will be used to control the powered fabric sensors on the MISSE experiment.Bob O'Connor How else might a sensor-enhanced fabric enhance human engagement with the space environment? For long-duration missions, explorers residing for months inside a spacecraft or habitat will crave experiential variety. Fabric and thin-film sensors might detect the space weather just outside a spacecraft or habitat and then use that data to alter the lighting and temperature inside. A similar system might even mimic certain external conditions. Imagine feeling a Martian breeze within a habitat's walls or the touch of a loved one conveyed through a spacesuit. To Probe Further Cherston et al. "Large-Area Electronic Skins in Space: Vision and Preflight Characterization for First Aerospace Piezoelectric E-Textile," Proceedings of SPIE. Wicaksono, Cherston et al. "Electronic Textile Gaia: Ubiquitous Computational Substrates Across Geometric Scales," IEEE Pervasive Computing. Yan et al. 
"Thermally Drawn Advanced Functional Fibers: New Frontier of Flexible Electronics," Materials Today. Lee, Veysset et al. "Dynamics of supersonic microparticle impact on elastomers revealed by real-time multi-frame imaging," Nature. Veysset et al. "High-velocity micro-projectile impact testing," Applied Physics Reviews. Funase et al. "Mission to Earth–Moon Lagrange Point by a 6U CubeSat: EQUULEUS," IEEE Aerospace and Electronic Systems Magazine. To engineer a fabric that can survive extreme conditions, we foresee experimenting with piezoelectric materials that have intrinsic thermal and radiation resilience, such as boron nitride nanotubes, as well as devices that have better intrinsic noise tolerance, such as sensors based on glass fibers. We also envision building a system that can intelligently adapt to local conditions and mission priorities, by self-regulating its sampling rates, signal gains, and so on. Space-resilient electronic fabrics may still be nascent, but the work is deeply cross-cutting. Textile designers, materials scientists, astrophysicists, astronautical engineers, electrical engineers, artists, planetary scientists, and cosmologists will all have a role to play in reimagining the exterior skins of future spacecraft and spacesuits. This skin, the boundary of person and the demarcation of place, is real estate ripe for use. This article appears in the December 2021 print issue as "The Smartly Dressed Spacecraft."

  • The Hyperloop Is Hyper Old
    by Vaclav Smil on 26. Novembra 2021. at 16:00

"Lord how this world improves as we grow older," reads the caption for a panel in the "March of Intellect," part of a series of colored etchings published between 1825 and 1829. The artist, William Heath (1794–1840), shows many futuristic contraptions, including a four-wheeled steam-powered horse called Velocity, a suspension bridge from Cape Town to Bengal, a gun-carrying platform lifted by four balloons, and a giant winged flying fish conveying convicts from England to New South Wales, in Australia. But the main object is a massive, seamless metallic tube taking travelers from East London's Greenwich Hill to Bengal, courtesy of the Grand Vacuum Tube Company. A public demonstration of the railway takes place in London in 1914 [top]; a 1910 photograph shows a working model of Émile Bachelet's magnetically levitated railway, in Mount Vernon, N.Y. [bottom] Émile Bachelet Collection/Archives Center/National Museum of American History Heath was no science-fiction pioneer. His fanciful etching was just a spoof of an engineering project proposed in 1825 and called the London and Edinburgh Vacuum Tunnel Company, which was to be established with capital of 20 million pounds sterling. The concept was based on a 1799 proposal made by George Medhurst: A rectangular tunnel was to move goods in wagons, the vacuum was to be created by the condensation of steam, and the impetus was to be "the pressure of the atmosphere, so astonishing as almost to exceed belief." Yes, this is the first known attempt at what during the second decade of the 21st century became known as the hyperloop. That word, coined by Elon Musk, constitutes his main original contribution to the technology. By the time Heath was drawing his intercontinental conveyor, enough was known about vacuum to realize that it would be the best option for achieving unprecedented travel speeds.
But no materials were available to build such a tube—above all, there was no way to produce affordable high-tensile steel—nor were there ready means to enclose people in vacuum-moving containers. Less than a century later, Émile Bachelet, a French electrician who emigrated to the United States, solved the propulsion part of the challenge with his 19 March 1912 patent of a "Levitation transmitting apparatus." In 1914, he presented a small-scale working model of a magnetically levitated train with a tubular prow, powerful magnets at the track's bottom, and tubular steel cars on an aluminum base. Virgin Hyperloop, which aims to commercialize the concept, has built a test track in Las Vegas [top]. The passenger pod [middle] is magnetically levitated; it can be introduced into the vacuum tube through an air lock [bottom] at the end.Virgin Hyperloop Japanese researchers have been experimenting with a modern version of Bachelet's maglev concept since 1969, testing open-air train models at a track in Miyazaki. Short trials were done in Germany and the Soviet Union. In 2002, China got the only operating maglev line—built by Siemens—running from the Shanghai Pudong International Airport to Shanghai; now China claims to be preparing to test it at speeds up to 1,000 kilometers per hour. But outside East Asia, maglev remained nothing but a curiosity until 2012, when Elon Musk put his spin on it. People unaware of this long history greeted the hyperloop as stunningly original and fabulously transformative. A decade later we have many route proposals, and many companies engaged in testing and design, but not a single commercial application that can demonstrate that this is an affordable, profitable, reliable, and widely replicable travel option. 
Vacuum physicists and railway engineers, who best appreciate the challenges involved in such projects, have pointed out a long list of fundamental difficulties that must be overcome before public-carrying vacuum tubes could be as common as steel-wheel high-speed rail. Other nontrivial problems run from the common and intractable—obtaining rights-of-way for hundreds, even thousands, of kilometers of track elevated on pylons in NIMBY-prone societies—to the uncommon and unprecedented: maintaining the thousandfold pressure difference between the inside and outside of an evacuated tube's steel walls along hundreds of kilometers of track while coping with the metal's thermal expansion. Before rushing to buy shares in a hyperloop venture in 2022, remember the 1825 London and Edinburgh Vacuum Tunnel Company.
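The thermal-expansion problem mentioned above can be put on a rough scale with the standard linear-expansion formula, ΔL = αLΔT. The route length and temperature swing below are illustrative assumptions, not figures from the column:

```python
# Rough scale of the thermal-expansion challenge for a continuous steel
# vacuum tube, using dL = alpha * L * dT. The route length and temperature
# swing are illustrative assumptions.
ALPHA_STEEL_PER_K = 1.2e-5  # typical linear expansion coefficient of steel
track_length_m = 500_000    # a hypothetical 500 km route
delta_t_k = 40              # plausible seasonal/day-night temperature swing

expansion_m = ALPHA_STEEL_PER_K * track_length_m * delta_t_k
print(f"unconstrained expansion over the route: ~{expansion_m:.0f} m")  # ~240 m
```

Hundreds of meters of unconstrained movement would have to be absorbed by expansion joints or restraint, all while preserving a vacuum-tight, smooth-walled tube.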

  • SambaNova CEO: “We’re Built for Large”
    by Samuel K. Moore on 26. Novembra 2021. at 14:00

AI, particularly the huge neural networks meant to understand and interact with us humans, is not a natural fit for the computer architectures that have dominated for decades. A host of startups recognized this in time to develop chips and sometimes the computers they'd power. Among them, Palo Alto-based SambaNova Systems is a standout. This summer the startup passed US $1 billion in venture funding, valuing the company at $5 billion. It aims to tackle the largest neural networks that require the most data using a custom-built stack of technology that includes the software, computer system, and processor, selling its use as a service instead of a package. IEEE Spectrum spoke to SambaNova CEO Rodrigo Liang in October 2021. IEEE Spectrum: What was the original idea behind SambaNova? Rodrigo Liang: This is the biggest transition since the internet, and most of the work done on AI is done on legacy platforms, legacy [processor] architectures that have been around for 25 or 30 years. (These architectures are geared to favor the flow of instructions rather than the flow of data.) We thought, let's get back to first principles. We're going to flip the paradigm on its head and not worry as much about the instructions but worry about the data, make sure that the data is where it needs to be. Remember, today, you have very little control over how you move the data in a system. In legacy architectures, you can't control where the data is, which cache it's sitting on. “Once we created the hardware, suddenly it opened up opportunities to really explore models like GPT-3.”—Rodrigo Liang, CEO SambaNova So we went back to first principles and said, "Let's just take a look at what AI actually wants, natively, not what other architectures cause AI to be." And what it wants is to actually create networks that are changing all the time.
Neural nets have data paths that connect and reconnect as the algorithm changes. We broke things down to a different set of sub-operators. Today, you have add, subtract, multiply, divide, load, and store as your typical operators. Here, you want operators that help with dataflow—things like map, reduce, and filter. These are things that are much more data focused than instruction focused. Once you look at how these software programs want to be and how they want to flow, then you can conclude what base units you need and how much software controllability you need to allow these networks to interconnect and flow most efficiently. Once you've got to that point, then you realize "we can actually implement that in a processor"—a highly dense, highly efficient, highly performing piece of silicon with a single purpose of running AI efficiently. And that's what we built here with SambaNova. Is this an example of hardware-software co-development, a term that I am hearing more and more? Liang: 100 percent. The first step is you take the software, you break it down, just see natively what you want it to do. Then we build the hardware. And what the hardware allowed us to do is explore much bigger problems than we could imagine before. In the developers' lab, things are small, because we can't handle production-size data sets. But once we created the hardware, suddenly it opened up opportunities to really explore models like GPT-3, which people are running using thousands of GPUs and with hundreds of people managing that one model. That's really impractical. How many companies are going to be able to afford to hire hundreds of people just to manage one model and have thousands of GPUs interconnected to run one thing? SambaNova Systems Cardinal SN10 Reconfigurable Dataflow Unit (RDU) is the industry's next-generation processor.
RDUs are designed to allow the data to flow through the processor in ways in which the model was intended to run, freely and without any bottlenecks. SambaNova So we asked, "How do we automate all of this?" Today, we deploy GPT-3 on a customer's behalf, and we operate the model for them. The hardware we're delivering as a software service. These customers are subscribing to it and paying us a monthly fee for that prediction. So now we can ask, how well is the software operating? How well is the hardware operating? With each generation, you iterate, and you get better and better. That's opposed to traditional hardware design where once you build a microprocessor, you throw it over the fence, and then somebody does something with it, and maybe, eventually, you hear something about it. Maybe you don't. Because we define it from the software, we build the hardware, we deploy the software, we make our money off these services, then the feedback loop is closed. We are using what we build, and if it's not working well, we'll know very quickly. “We’re not trying to be everything to everybody. We’ve picked some lanes that we’re really good at and really focus on AI for production.” So you are spinning up new silicon that involves that feedback from the experience so far? Liang: Yeah. We're constantly building hardware; we're constantly building software—new software releases that do different things and are able to support new models that maybe people are just starting to hear about. We have strong ties to university research with Stanford, Cornell, and Purdue professors involved. We stay ahead and are able to look at what's coming, so our customers don't have to. They will trust that we can help them pick the right models that are coming down the pipeline. Is this hardware-and-software as service, full stack model of a computing company, the future in this space? Liang: We're the only ones doing it today and for a couple different reasons.
For one, in order to do these differentiated services, you really need a piece of silicon that's differentiated. You start with people who can produce a high-performance piece of silicon to do this type of computing; that requires a certain skill set. But then to have the skill set to build a software stack and then have the skill set to create models on behalf of our customers and then have the skill set to deploy on a customer's behalf, those are all things that are really hard to do; it's a lot of work. For us, we've been able to do it because we're very focused on a certain set of workloads, a certain type of model, a certain type of use case that's most valuable to enterprises. We then focus on taking those to production. We're not trying to be everything to everybody. We've picked some lanes that we're really good at and really focus on AI for production. For example, with natural language models, we're taking those for certain use cases and taking those to production. Image models, we're thinking about high resolution only. The world of AI is actually shockingly low res these days. [Today's computers] can't train high-res images; they have to downsample them. We're the only ones today that are able to do true resolution, original resolution, and train them as is. It sounds like your company has to have a staff that can understand the complete stack of the technology, from software down to the chip. Liang: Yeah. That's one of the most differentiated advantages we have. Chip companies know how to do chips, but they don't understand the stack. AI companies know how to do AI, but they can't do silicon. And the compiler technology—think about... how few companies are actually writing languages. 
These technologies are hard for certain classes of people to really understand across the divide. We were able to assemble a team that can truly do it. If you want to do hardware-software co-design, you truly have to understand across the boundaries, because if you don't, then you're not getting the advantages of it. The other thing that I think you are also touching on is the expertise in the customer's own house. If you go outside of the Fortune 50, most of them do not have an AI department with 200 data scientists who are A players. They might have 5. If you think about the expertise gap between these larger companies and your Fortune 500 company, how are they going to compete in this next age of AI? They need people who come in and provide them a lot of the infrastructure so they don't have to build it themselves. And most of those companies don't want to be AI centers. They have a very healthy business selling whatever they're selling. They just need the capabilities the AI brings. SambaNova Systems DataScale is an integrated software and hardware system optimized for dataflow from algorithms to silicon. SambaNova DataScale is the core infrastructure for organizations that want to quickly build and deploy next-generation AI technologies at scale. (Image: SambaNova) We do that on their behalf. Because everything is automated, we can service our systems and our platforms more efficiently than anybody else can. Other service companies would have to staff up on somebody else's behalf. But that wouldn't be practical. To the extent that there is a shortage of semiconductors, there is also a shortage of AI experts. So if I were to hire just as many as my customer had to hire, I couldn't scale the business up. But because I can do it automatically and much more efficiently, they don't have to hire all those people, and neither do I. What's the next milestone you are looking towards? What are you working on? 
Liang: Well, we've raised over $1 billion in venture capital at a $5 billion valuation, but the company's fairly young. We're just approaching our four-year anniversary, and so we've got a lot of aspirations for ourselves as far as being able to help a much broader set of customers. Like I said, if you really see how many companies are truly putting AI in production, it's still a very small percentage. So we're very focused on getting customers into production with AI and getting our solutions out there for people. You're going to see us talk a lot about large data and large models. If you've got hairy problems with too much data and the models you need are too big, that's our wheelhouse. We're not doing little ones. Our place is when you have big, big enterprise models with tons of data; let us crunch on that for you. We're going to deploy larger and larger models, larger and larger solutions for people. Tell me about a result that took your breath away. What is one of the coolest things that you've seen your system do? Liang: One of our partners, Argonne National Labs, is doing this project mapping the universe. Can you imagine this? They're mapping the universe. They've been doing a lot of work trying to map the universe [training an AI with] really high-resolution images they've taken over many, many years. Well, as you know, artifacts in the atmosphere can really cause a lot of problems. The accuracy is actually not very good. You have to downsample these images and stitch them together, and then you've got all the atmospheric noise. There are scientists who are much smarter than I am to figure all that stuff out. But we came in, shipped the systems, plugged them in, and within 45 minutes, they were up and training. They mapped the whole thing without changing the image size and got a higher level of accuracy than what they had gotten for years before, and in much, much less time. We're really proud of that. 
It's the type of thing where you're confident your technology can do it, and then you see amazing customers do something you didn't expect and get this tremendous result. Like I said, we're built for large. In e-commerce, with all the uses and all of the products they've got, give me the entire data set; don't chop it up. Today, they have to chop it, because the infrastructure doesn't allow it. In banking, all of the risks that you have across all your entities, well, let me see all the data. Across all these different use cases, we're convinced that more data produces better results, and that's what we're built for.
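Liang's contrast between instruction-style operators (add, multiply, load, store) and dataflow operators (map, reduce, filter) can be sketched in ordinary Python. This is only an illustration of the two programming styles he describes, not SambaNova's actual software stack, and the function names are invented for the example:

```python
from functools import reduce

# Instruction-centric style: explicit arithmetic and control flow,
# touching one value at a time (multiply, branch, add, store).
def weighted_relu_sum_imperative(values, weight):
    total = 0.0
    for v in values:
        scaled = v * weight   # "multiply"
        if scaled > 0:        # branch
            total += scaled   # "add" and "store"
    return total

# Data-centric style: the same computation phrased as a pipeline of
# dataflow operators that the values stream through.
def weighted_relu_sum_dataflow(values, weight):
    scaled = map(lambda v: v * weight, values)            # map stage
    positive = filter(lambda v: v > 0, scaled)            # filter stage
    return reduce(lambda acc, v: acc + v, positive, 0.0)  # reduce stage

activations = [-1.5, 2.0, 0.5, -0.25, 3.0]
assert weighted_relu_sum_imperative(activations, 2.0) == \
       weighted_relu_sum_dataflow(activations, 2.0) == 11.0
```

On a dataflow architecture, the idea is that each pipeline stage maps onto physically configured compute units, with values streaming between them rather than being fetched and stored one instruction at a time.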

  • The World’s Most Popular EVs Aren’t Cars, Trucks, or Motorcycles
    by Lawrence Ulrich on 25. Novembra 2021. at 14:00

When the U.S. House of Representatives passed the Build Back Better Act last week, a lesser-recognized provision earmarked some $4.1 billion in tax credits to further stimulate an already booming EV market that Elon Musk hasn't even dabbled in. Electric bicycles, better known as e-bikes, have moved from novelty to mainstream with breathtaking speed. They've been a boon to hard-working delivery persons during the pandemic (and their impatient customers), and to commuters who don't care to be a sweaty mess when they arrive. And while the scoffing tends to center around the "purity" of cycling—the idea that e-bike riders are somehow lazy cheaters—that electric assist is actually luring people off the couch for healthy exercise. That's especially welcome for older or out-of-practice riders (which describes a whole lot of folks) who might otherwise avoid cycling entirely, put off by daunting hills or longer distances. While powerful "Class 3" models especially are trying the patience of pedestrians in crowded cities like New York, with blazing assisted speeds approaching 30 mph, e-bikes are now front and center in discussions of future urban mobility. They're a way to potentially free up precious street space, provide alternatives to automobiles, and reduce energy consumption and harmful emissions. California, through its powerful Air Resources Board, recently allocated $10 million in rebates for e-bike buyers, a smaller-scale version of state or federal tax breaks for EV car buyers. The possibilities are fueling cool tech ideas, from covered, rain-proof cargo bikes to pavement-embedded wireless chargers and automated stabilization systems to help senior riders. CityQ is taking pre-orders for a four-wheeled cargo "bike" that it touts as cycling "with a Tesla feeling." 
According to market research company NPD Group, the pandemic helped increase e-bike sales by 145 percent from 2019 to 2020, more than double the growth of traditional bikes. Exact figures on industry sales are hard to pin down, yet The New York Times quoted experts saying Americans bought roughly 500,000 e-bikes in 2020, compared to about 210,000 plug-in automobiles. Industry analysts expect that uptick in adoption to continue. A report by the Business Research Company shows the global e-bike market growing from $32.5 billion last year to $53 billion by 2025, for annual compound growth of 9.9 percent. Even in bike-saturated Europe, e-bike sales jumped by 23 percent in 2019. And Deloitte expects 300 million e-bikes on the world's streets by 2023. That's a lot of bikes, batteries, and saved muscle power from thankful riders. If you're not up to speed on e-bikes, or you're curious about taking one for a spin, here's a look at some of the tech, terms, and players: Pedal to the Metal The tech behind e-bikes falls into two simple categories, even if the choice between them isn't as simple. Hub motors integrate a motor directly in the wheel center (either front or rear wheel), in an enclosed system that's independent from the bike chain and pedal drive. There are two main types: Geared hub motors incorporate internal planetary gears for reduction, allowing the motor to operate efficiently at high rpm while the bike wheel spins at a lower speed. Gearless hub motors directly link the motor's stator to the bike axle. That reduces a key point of weakness—the toothed gears. Aside from bearings, there are no moving parts, nothing to wear out. Hub motors are relatively affordable, low-maintenance, and mass-produced by the millions. A do-it-yourselfer can find entire 800- to 1,000-watt hub motor kits for around $200, where mid-drive power can cost three to five times as much. 
Hub motors add no extra stress or wear to a chain or shifters, and offer another advantage versus a mid-drive setup: If a hub motor conks out, you can still pedal home, and vice versa; if a chain or pedal breaks, a rider can keep moving under electric power. The downsides? Nearly every hub motor has a single gear ratio; fine for the flats, not so good for hill climbs, where the motor can't match a mid-drive unit for a robust shove against gravity, and may even overheat on long ascents. Hub motors can also make a bike feel unbalanced and awkward to steer—like it's being pushed or pulled rather than pedaled. Tire changes are more difficult because of the wheel-mounted motor. "Mid-drive" bikes, in contrast, locate the motor inside the frame and between the pedals at the bottom bracket. Motor power is transferred through the chain drive to the rear wheel. As with EVs, those motors are growing lighter, stronger, quieter, and more affordable. The biggest edge—with a parallel downside—is sending power through a traditional chain and gear set: The motor can deliver major torque up a steep hill or from a standstill, in a lower gear and higher rpm, just as your pedals do. That Energizer-style power keeps going and going, even on long climbs. The major con is the constant surge of power through the poor chain: A pro cyclist can generate roughly 400 watts of power over an hour. Most humans with normal-size thighs can't manage even half that. But e-bike motors can generate up to 750 watts of continuous power. It's why most mid-drive e-bikes come with uprated chains. And if that chain snaps, you're not going anywhere, just as on an old-school bike. On the upside, newer mid-drive motors are notably smaller and lighter than hub units. 
Hidden inside frames, they're making some e-bikes look so stealthy that onlookers have no idea the bike is electric. For both types, a speed sensor or torque sensor detects pedal force or wheel rotation, and activates the motor for a helpful forward shove. Riders can typically adjust the level of electric assist, or just pedal harder for a corresponding boost in motor grunt. But mid-drive brings another advantage, with genuine torque sensors to detect the human power applied at the pedal crank, and smoothly dial in electric assist. Hub motors often use a simple cadence sensor at the wheel, and can produce jerky or unpredictable motor boost, especially going uphill. Battery Range vs. Reality A big issue with e-bike range claims is that there are so many variables: rider weight, wind and tire resistance, varying terrain and topography. Some electric bike companies claim up to 80 or even 100 miles of unassisted range, but expert riders say that would only be possible if most of those miles were downhill. As a general rule of thumb, throttle e-bikes that combine a 500-to-750-watt motor and a 480-watt-hour (Wh) battery can cover only about 20 miles at best on battery power alone, or less than 25 watt-hours per mile. Pedal-assisted bikes go farther: Figure about 15 watt-hours per mile, or 32 miles from that same 480 Wh battery, with a roughly "medium" level of preset electric assist. The price of that electric boost is weight. A lithium-ion battery usually adds a significant 6 to 8 pounds to the bike; weight that your legs must drive once its energy is depleted. Batteries can be mounted on rear racks for easy access and removal, at the price of a less-than-ideal location: too high and too rearward, which can affect handling. 
Batteries externally mounted on the downtube — the frame tube that runs diagonally down to the pedal crank — eliminate that issue, keeping weight low and along the bike's main axis. Batteries integrated inside the downtube create the sleekest profile, making these e-bikes look less bulky and more like a traditional cycle. 3, 2, 1, Go Spurred by PeopleForBikes, a national advocacy group and industry trade association, more than 30 states have adopted a "3-Class" system that standardizes e-bikes based on their type of assist and how fast they can propel you. All three classes limit a motor's go-power to 750 watts, or 1 horsepower. Class 1 e-bikes generate an electric boost only when you pedal, and reach a maximum assisted speed of 20 mph. Class 2 models also limit assisted speed to 20 mph. But they add a hand throttle, either a grip-twist as found on motorcycles or a button, that can drive the electric motor even when you're not pedaling. Class 3 bikes are the muscular alternative to Class 1. They're also exclusively pedal-assisted, but with a maximum boosted speed of 28 mph. Look out, LeMond: That's roughly the speed a professional bicyclist can maintain over long distances on flat ground. The roadway infrastructure that each class can use, however, remains a crazy quilt of local, state, and national regulations. As the speedsters of the e-bike world, Class 3 models are typically allowed only on "curb-to-curb" roadways or bike lanes, and restricted on bike trails or multi-use paths shared with pedestrians. In Europe, electric mountain bikes, or eMTBs, are largely welcome on non-motorized trails. American riders should be aware that the U.S. Forest Service, Bureau of Land Management, and National Park Service consider eMTBs no different from a dirt bike, ATV, or other motorized vehicle. So even Class 1 bikes are barred from non-motorized trails. Some states, including Pennsylvania, Utah, and Colorado, have made exceptions for trails in state parks. 
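The 3-Class rules and the range rules of thumb above can be collected into a short Python sketch. The wattage, speed, and watt-hour numbers come straight from the article; the dictionary layout and function name are mine:

```python
# U.S. "3-Class" e-bike system as described above.
# All three classes cap motor power at 750 watts (1 horsepower).
MAX_MOTOR_WATTS = 750

EBIKE_CLASSES = {
    1: {"pedal_assist": True, "throttle": False, "max_assisted_mph": 20},
    2: {"pedal_assist": True, "throttle": True,  "max_assisted_mph": 20},
    3: {"pedal_assist": True, "throttle": False, "max_assisted_mph": 28},
}

# Range rule of thumb: throttle riding burns roughly 25 Wh per mile,
# pedal assist roughly 15 Wh per mile.
def estimated_range_miles(battery_wh, wh_per_mile):
    return battery_wh / wh_per_mile

print(estimated_range_miles(480, 25))  # 19.2 -- "about 20 miles at best"
print(estimated_range_miles(480, 15))  # 32.0 -- matches the article's figure
```

The same arithmetic also explains why claimed 80-to-100-mile ranges are suspect: at even the gentler 15 Wh-per-mile figure, 100 miles would demand a 1,500 Wh battery, roughly triple the capacity discussed here.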
The Players, And What You'll Pay E-bike prices range from as little as $1,200, for an Aventon Pace 350 Step-Through, to $7,500 (or more) for "connected" bikes like the Stromer ST3 Sport. Stromer's luxurious "e-commuter" brings a powerful rear hub motor (with 600 watts and 44 Nm of torque), fat Pirelli tires, and connectivity features like GPS, remote locking and unlocking, stat readouts, and over-the-air updates. Most of the biggest names in cycling have embraced e-bikes: Giant, Trek, Specialized, Schwinn. Even automakers like BMW, focused on expanding their mobility portfolios, are jumping into the game. Last week, Porsche took a majority stake in GreyP, the high-end Croatian bike company started by Mate Rimac, the electric hypercar entrepreneur and creator of the $2.4 million Rimac Nevera. Rimac himself controls Bugatti Rimac, with Porsche holding a minority stake in this newly combined purveyor of fantasy automobiles. That's lofty company for a bicycle manufacturer: Imagine a technology trickle-down from seven-figure electric Rimacs and Bugattis to the bicycles you ride for work or play.

  • Revealed: Jupiter’s Secret Power Source
    by Ned Potter on 24. Novembra 2021. at 20:00

For all its other problems, Earth is lucky. Warmed mostly by the sun, 150 million km away, and shielded by a thin but protective atmosphere, the planet has a surface temperature averaging 14 to 15 degrees Celsius—a good number to support liquid oceans and a riot of carbon-based life. Jupiter is a different story. Its upper atmosphere (Jupiter has no solid surface) has a temperature closer to what you'd find on Venus than on some of Jupiter's own moons. Planetary scientists have for decades puzzled over why this planet, so far from the sun, is so inexplicably warm. In 2021, however, the solution to the mystery may at last have been found. The solar system's biggest planet has a big problem You are orbiting Jupiter, 779 million km from the sun, where physics and logic say it ought to be very, very cold. Sunlight, out here, is less than four percent as intense as it is on Earth. If solar heating were the only factor at play, the planet's upper atmosphere would average 70 degrees below zero Celsius. Jupiter in the infrared It doesn't. The temperature exceeds 400 degrees Celsius—and scientists have puzzled over that for half a century. They have sometimes spoken of Jupiter as having an "energy crisis." Now, an international team led by James O'Donoghue of JAXA, the Japanese space agency, says they've found an answer. Jupiter's northern (and southern) lights Jupiter's polar auroras are the largest and most powerful known in the solar system—and O'Donoghue says the energy in them, generated as Jupiter's atmosphere is buffeted by charged particles in its magnetic field, is strong enough to heat the outer atmosphere of the entire planet. "The auroral power, delivered by the auroral mechanism, is actually 100 terawatts per hemisphere, and I always like that fact," says O'Donoghue. "I think that's something like 100,000 power stations." The auroras had been suspected as Jupiter's secret heat source since the 1970s. 
But until now, scientists thought Jupiter's giant, swirling east-west cloud bands might shear the heat away before it could spread very far from the poles. Winds in the cloud bands reach 500 km/h. To try to solve the mystery, the research team set out to create an infrared heat map of Jupiter's atmosphere. They used the 10-meter Keck II telescope atop Mauna Kea in Hawaii, one of the five largest in the world, to take spectrographic readings of the planet on two nights: 14 April 2016 and 25 January 2017. Their April 2016 heat map revealed that the regions around the polar auroras were indeed hottest, and the heat did spread from there—though the effect tailed off toward Jupiter's equator. The first night of Keck observations The heat was strong enough to propagate despite those powerful winds. It was a promising find, but they needed more. Fortunately, their next observation turned up, in O'Donoghue's words, "something spectacular." The second night of Keck observations The auroras the team observed in January 2017 were about 100 degrees hotter than on the first night—and so were temperatures at every point from there to the equator. The researchers soon learned that, around the time of their January 2017 observation, Jupiter had been hit by an outsized surge in the solar wind, whose ionized particles would compress Jupiter's magnetic field and make the aurora more powerful. It was sheer luck—a "happy accident," says O'Donoghue—that the surge of particles happened on their second night. Such pulses of energy probably happen every few weeks on average, but it is hard to know exactly when. Other researchers had already tried to explain Jupiter's warmth by other means—perhaps some sort of acoustic-wave heating or convection from the planet's core, for instance—but they couldn't create convincing models that worked as well as the auroras. O'Donoghue and his colleagues worked for years on the resulting paper. 
They say they went through more than a dozen drafts before the paper was accepted for publication in the journal Nature earlier this year. Where does this lead? It's too early to say, but scientists will want to replicate the findings and then see if they also explain the heating seen on the other gas giants in the solar system—Saturn, Uranus, and Neptune. Understanding the auroral effects may also sharpen our picture of Jupiter's moons, including Europa and Ganymede, which are believed to have briny oceans beneath their icy outer crusts and may be good places to look for life. But we're getting ahead of ourselves. For now, the research continues. "It's funny," says O'Donoghue, "the reactions from some people in the field. Some people thought, 'Oh, yeah, we knew it was the aurora all along.' And then other people are saying, 'Are you sure it's the aurora?' It tells you there's an issue, and hopefully our observations have solved it definitively." "We once thought that it could happen, that the aurora could be the source," he says, "but we showed that it does happen." Photos, from top: A. Simon/Goddard Space Flight Center and M. H. Wong/University of California, Berkeley/OPAL/ESA/NASA; Gemini Observatory/AURA/NSF/UC Berkeley; J. Nichols/University of Leicester/ESA/NASA; JPL-Caltech/NASA; Kevin M. Gill/JPL-Caltech/SwRI/MSSS/NASA; Ethan Tweedie/W. M. Keck Observatory; A. Simon/Goddard Space Flight Center and M. H. Wong/University of California, Berkeley/OPAL/ESA/NASA; J. O'Donoghue/JAXA (heat maps) and STSCI/NASA (planet). This article appears in the December 2021 print issue as "Jupiter's Electric Blanket."
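Two of the figures in the story above are easy to sanity-check with a few lines of Python, assuming mean orbital distances of roughly 150 million km (Earth) and 779 million km (Jupiter), and taking a "power station" as a nominal 1-gigawatt plant:

```python
# Sunlight intensity falls off with the square of distance from the sun.
EARTH_ORBIT_KM = 150_000_000
JUPITER_ORBIT_KM = 779_000_000

relative_intensity = (EARTH_ORBIT_KM / JUPITER_ORBIT_KM) ** 2
print(f"{relative_intensity:.1%}")  # 3.7% -- "less than four percent"

# 100 terawatts per auroral hemisphere versus a nominal 1-GW power station.
AURORAL_POWER_W = 100e12
POWER_STATION_W = 1e9
print(AURORAL_POWER_W / POWER_STATION_W)  # 100000.0 -- "100,000 power stations"
```

Both of the article's round numbers hold up: the inverse-square falloff gives about 3.7 percent of Earth's sunlight at Jupiter's distance, and 100 terawatts is indeed 100,000 gigawatt-class plants.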

  • Paying Tribute to Former IEEE President Richard Gowen
    by Joanna Goodrich on 24. Novembra 2021. at 19:00

Richard Gowen, 1984 IEEE president, died on 12 November at the age of 86. An active volunteer who held many high-level positions throughout the organization, Gowen was president of the IEEE Foundation from 2005 to 2011 and two years later was appointed president emeritus of the IEEE Foundation. He was also a past chair of the IEEE History Committee. "I, along with the IEEE staff and Board of Directors, are deeply saddened by this loss," says Susan K. (Kathy) Land, 2021 IEEE president and CEO. "Dick served not only as IEEE president but was a dedicated advocate of the IEEE Foundation and a strong champion of the IEEE History Center. I know I speak for both the members of IEEE and supporters of the IEEE Foundation in extending our sincere sympathies to his family and colleagues." At the time of his death, he was president and CEO of Dakota Power, a company in Rapid City, S.D., that develops lightweight electric drive systems for military and civilian use. EDUCATION Gowen was born in New Brunswick, N.J., and received his bachelor's degree in electrical engineering in 1957 from Rutgers University there. While at Rutgers, he participated in the school's ROTC. After graduating, he joined RCA Laboratories in Princeton, N.J., as a researcher but was called to active duty by the U.S. Air Force. He was a communications electronics officer at Yaak Air Force Station, in Montana. While there, he applied to join the electrical engineering faculty at the Air Force Academy, in Colorado Springs, Colo. He was accepted, and the academy sponsored his postgraduate studies at Iowa State University, in Ames. He earned a master's degree in electrical engineering in 1959 and a Ph.D. in biomedical engineering in 1962. For his doctoral research, he developed an engineering model of the cardiovascular system. His project led to the development of a device worn on a person's finger that measures blood pressure during physical exercise. He was granted his first U.S. 
patent for the technology. ASSISTING NASA Gowen began his academic career in 1962 as an electrical engineering professor at the Air Force Academy. He was selected in 1966 to be an astronaut in NASA's Apollo 1 program but withdrew after suffering a back injury that left him unable to walk. After undergoing an operation that restored his ability to walk, he returned to the academy. In addition to teaching, he led a research team developing technology that could help NASA study the effects of weightlessness on astronauts' cardiovascular systems. The research was conducted at a new lab that NASA and the Air Force built at the academy. Gowen and his team worked with the astronauts of the Apollo and Skylab missions to virtually test and evaluate physiological changes that might have occurred during their long space missions. His research led to the development of the lower-body negative-pressure device, which can vary the transfer of fluids from the upper body to the lower body. It gave the research team "the ability to evaluate the movement of fluids on the cardiovascular system," Gowen wrote in an article about the research on the Engineering and Technology History Wiki. The device is now on display in Washington, D.C., at the Smithsonian National Air and Space Museum. Gowen served as a consultant for the U.S. Department of Defense while at the academy. He retired in 1977 with the rank of lieutenant colonel. He joined the South Dakota School of Mines and Technology, in Rapid City, in 1977 as vice president and dean of engineering. He left seven years later to serve as president of Dakota State College, now Dakota State University, in Madison, S.D. In 1987 he returned to South Dakota Mines as its president. Under his leadership, new engineering programs were created and graduate research projects were expanded. He also increased the number of projects conducted in collaboration with NASA and the U.S. military. 
After he retired from the school in 2003, he was appointed a member of the South Dakota Department of Education. In that role, he was active in encouraging more Native Americans to pursue careers in science, technology, engineering, and math. Also in 2003, at the request of the U.S. National Science Foundation, he led the conversion of the Homestake gold mine, in Lead, S.D., into a scientific laboratory. The Deep Underground Science and Engineering Laboratory opened in 2009. Gowen was inducted into the South Dakota Hall of Fame in 2012 for his work in expanding academic research and STEM education. He helped found Dakota Power in 2007. ACTIVE VOLUNTEER Gowen joined IEEE in 1956 to give back to the engineering profession, gain leadership skills, and serve on boards and committees, according to the Wiki article. He was active in the IEEE Denver Section and was a founding member of the IEEE Pikes Peak Section, in Colorado Springs. He was the 1976 Region 5 director and a member of several boards, including the IEEE Regional Activities Board (now the IEEE Member and Geographic Activities Board), the IEEE Standards Association Standards Board, and the IEEE Technical Activities Board. "Over several decades, Dick made enormous contributions to IEEE, the IEEE Foundation, and the engineering profession," says IEEE Life Fellow Lyle Feisel, director emeritus of the IEEE Foundation. "He was a risk-taker who saw solutions where others saw only problems. Above all, he had enthusiasm, often belied by his low-key approach." Gowen was elevated to IEEE Fellow in 1981 in recognition of his contributions to space research and education. He played a major role in the merger of IEEE and Eta Kappa Nu to form the IEEE-Eta Kappa Nu honor society. Gowen was elevated in 2002 to eminent member of IEEE-HKN. He and his wife, Nancy, were avid supporters of the IEEE Foundation and the IEEE History Center. 
Last year, thanks to their generous donation, the History Center was able to complete its GPS collection on its Engineering and Technology History Wiki. Now oral histories from all four GPS fathers—Brad Parkinson, James Spilker, Richard Schwartz, and Hugo Fruehauf—are available online. The Gowens were also members of the IEEE Heritage Circle and the IEEE Goldsmith Legacy League. The Heritage Circle acknowledges members who have pledged more than US $10,000 to support IEEE programs. Legacy League members have pledged money to the IEEE Foundation through a bequest in their will, trust, life insurance policy, or retirement plan. "Dick's contributions to IEEE and the IEEE Foundation were far-reaching, impactful, and impossible to measure," says Karen Galuchie, IEEE Foundation executive director. "He was known as a servant leader and tirelessly dedicated his time, talent, and treasure to making IEEE stronger and more productive. His impression on IEEE will last forever." Gifts can be made in Gowen's memory to a variety of IEEE's philanthropic programs that were important to him, such as the IEEE Foundation Fund, the IEEE History Center, and IEEE-HKN. The Gowen family will be notified of your donation unless you make your gift anonymously, according to Galuchie.

  • Learn How to Use a High-Performance Digitizer
    by Teledyne on 24. Novembra 2021. at 15:05

Webinar: High-Performance Digitizer Basics Part 3: How to Use a High-Performance Digitizer. Date: Tuesday, December 7, 2021. Time: 10 AM PST | 1 PM EST. Duration: 45 minutes. Join Teledyne SP Devices for Part 3 in a three-part introductory webinar series on high-performance digitizers. Topics covered in this part of the series: interfacing to external systems; real-time digital signal processing; systems with few or many channels; and software development and support tools. Who should attend? Developers working with high-performance data-acquisition systems who would like to understand the capabilities and building blocks of a digitizer. What attendees will learn: how digitizer features and functions can be used in different applications and measurement scenarios. Presenter: Thomas Elter, Senior Field Applications Engineer. Click here to watch Part 1, "What is a High-Performance Digitizer?", on demand. Click here to watch Part 2, "How to Select a High-Performance Digitizer," on demand.

  • Years Later, Alphabet’s Everyday Robots Have Made Some Progress
    by Evan Ackerman on 23. Novembra 2021. at 23:51

Last week, Google or Alphabet or X or whatever you want to call it announced that its Everyday Robots team has grown enough and made enough progress that it's time for it to become its own thing, now called, you guessed it, "Everyday Robots." There's a new website of questionable design along with a lot of fluffy descriptions of what Everyday Robots is all about. But fortunately, there are also some new videos and enough details about the engineering and the team's approach that it's worth spending a little bit of time wading through the clutter to see what Everyday Robots has been up to over the last couple of years and what their plans are for the near future. [Photo caption: That close to the arm seems like a really bad place to put an E-Stop, right?] Our headline may sound a little bit snarky, but the headline in Alphabet's own announcement blog post is "everyday robots are (slowly) leaving the lab." It's less of a dig and more of an acknowledgement that getting mobile manipulators to usefully operate in semi-structured environments has been, and continues to be, a huge challenge. We'll get into the details in a moment, but the high-level news here is that Alphabet appears to have thrown a lot of resources behind this effort while embracing a long time horizon, and that its investment is starting to pay dividends. This is a nice surprise, considering the somewhat haphazard state (at least to outside appearances) of Google's robotics ventures over the years. The goal of Everyday Robots, according to Astro Teller, who runs Alphabet's moonshot stuff, is to create "a general-purpose learning robot," which sounds moonshot-y enough, I suppose. To be fair, they've got an impressive amount of hardware deployed, says Everyday Robots' Hans Peter Brøndmo: We are now operating a fleet of more than 100 robot prototypes that are autonomously performing a range of useful tasks around our offices. 
    The same robot that sorts trash can now be equipped with a squeegee to wipe tables, and use the same gripper that grasps cups to open doors.

    That's a lot of robots, which is awesome, but I have to question what "autonomously" actually means along with what "a range of useful tasks" actually means. There is really not enough publicly available information for us (or anyone?) to assess what Everyday Robots is doing with its fleet of 100 prototypes, how much manipulator-holding is required, the constraints under which they operate, and whether calling what they do "useful" is appropriate. If you'd rather not wade through Everyday Robots' weirdly overengineered website, we've extracted the good stuff (the videos, mostly) and reposted them here, along with a little bit of commentary underneath each.

    Introducing Everyday Robots

    0:01 — Is it just me, or does the gearing behind those motions sound kind of, um, unhealthy?

    0:25 — A bit of an overstatement about the Nobel Prize for picking a cup up off of a table, I think. Robots are pretty good at perceiving and grasping cups off of tables, because it's such a common task. Like, I get the point, but I just think there are better examples of problems that are currently human-easy and robot-hard.

    1:13 — It's not necessarily useful to draw that parallel between computers and smartphones and compare them to robots, because there are certain physical realities (like motors and manipulation requirements) that prevent the kind of scaling to which the narrator refers.

    1:35 — This is a red flag for me because we've heard this "it's a platform" thing so many times before and it never, ever works out. But people keep on trying it anyway. It might be effective when constrained to a research environment, but fundamentally, "platform" typically means "getting it to do (commercially?) useful stuff is someone else's problem," and I'm not sure that's ever been a successful model for robots.

    2:10 — Yeah, okay.
    This robot sounds a lot more normal than the robots at the beginning of the video; what's up with that?

    2:30 — I am a big fan of Moravec's Paradox and I wish it would get brought up more when people talk to the public about robots.

    The challenge of everyday

    0:18 — I like the door example, because you can easily imagine how many different ways it can go that would be catastrophic for most robots: different levers or knobs, glass in places, variable weight and resistance, and then, of course, thresholds and other nasty things like that.

    1:03 — Yes. It can't be reinforced enough, especially in this context, that computers (and by extension robots) are really bad at understanding things. Recognizing things, yes. Understanding them, not so much.

    1:40 — People really like throwing shade at Boston Dynamics, don't they? But this doesn't seem fair to me, especially for a company that Google used to own. What Boston Dynamics is doing is very hard, very impressive, and come on, pretty darn exciting. You can acknowledge that someone else is working on hard and exciting problems while you're working on different hard and exciting problems yourself, and not be a little miffed because what you're doing is, like, less flashy or whatever.

    A robot that learns

    0:26 — Saying that the robot is low cost is meaningless without telling us how much it costs. Seriously: "low cost" for a mobile manipulator like this could easily be (and almost certainly is) several tens of thousands of dollars at the very least.

    1:10 — I love the inclusion of things not working. Everyone should do this when presenting a new robot project. Even if your budget is infinity, nobody gets everything right all the time, and we all feel better knowing that others are just as flawed as we are.
    1:35 — I'd personally steer clear of using words like "intelligently" when talking about robots trained using reinforcement learning techniques, because most people associate "intelligence" with the kind of fundamental world understanding that robots really do not have.

    Training the first task

    1:20 — As a research task, I can see this being a useful project, but it's important to point out that this is a terrible way of automating the sorting of recyclables from trash. Since all of the trash and recyclables already get collected and (presumably) brought to a few centralized locations, in reality you'd just have your system there, where the robots could be stationary and have some control over their environment and do a much better job much more efficiently.

    1:15 — Hopefully they'll talk more about this later, but when thinking about this montage, it's important to ask which of these tasks in the real world you would actually want a mobile manipulator to be doing, and which you would just want automated somehow, because those are very different things.

    Building with everyone

    0:19 — It could be a little premature to be talking about ethics at this point, but on the other hand, there's a reasonable argument to be made that there's no such thing as too early to consider the ethical implications of your robotics research. The latter is probably a better perspective, honestly, and I'm glad they're thinking about it in a serious and proactive way.

    1:28 — Robots like these are not going to steal your job. I promise.

    2:18 — Robots like these are also not the robots that he's talking about here, but the point he's making is a good one, because in the near- to medium-term, robots are going to be most valuable in roles where they can increase human productivity by augmenting what humans can do on their own, rather than replacing humans completely.

    3:16 — Again, that platform idea...blarg.
    The whole "someone has written those applications" thing, uh, who, exactly? And why would they? The difference between smartphones (which have a lucrative app ecosystem) and robots (which do not) is that without any third party apps at all, a smartphone has core functionality useful enough that it justifies its own cost. It's going to be a long time before robots are at that point, and they'll never get there if the software applications are always someone else's problem.

    I'm a little bit torn on this whole thing. A fleet of 100 mobile manipulators is amazing. Pouring money and people into solving hard robotics problems is also amazing. I'm just not sure that the vision of an "Everyday Robot" that we're being asked to buy into is necessarily a realistic one. The impression I get from watching all of these videos and reading through the website is that Everyday Robots wants us to believe that it's actually working towards putting general purpose mobile manipulators into everyday environments in a way where people (outside of the Google Campus) will be able to benefit from them. And maybe the company is working towards that exact thing, but is that a practical goal and does it make sense?

    The fundamental research being undertaken seems solid; these are definitely hard problems, and solutions to these problems will help advance the field. (Those advances could be especially significant if these techniques and results are published or otherwise shared with the community.) And if the reason to embody this work in a robotic platform is to help inspire that research, then great, I have no issue with that. But I'm really hesitant to embrace this vision of generalized in-home mobile manipulators doing useful tasks autonomously in a way that's likely to significantly help anyone who's actually watching Everyday Robots' videos. And maybe this is the whole point of a moonshot vision—to work on something hard that won't pay off for a long time.
    And again, I have no problem with that. However, if that's the case, Everyday Robots should be careful about how it contextualizes and portrays its efforts (and even its successes), why it's working on a particular set of things, and how outside observers should set their expectations. Over and over, companies have overpromised and underdelivered on helpful and affordable robots. My hope is that Everyday Robots is not in the middle of making the exact same mistake.

  • Fathers Can Be Gender Equity Advocates
    by Qusi Alqarqaz on 23. Novembra 2021. at 19:00

    In my article "A Father's Perspective About Daughters and Engineering," published in 2016, I shared my frustration about the lack of role models and the cultural messages that had left my two brilliant daughters—and many of their female friends—with little interest in pursuing an engineering career. After the article was published, I received an email from Michelle Travis, who was writing a book about dads and daughters. She wanted to know my thoughts about creating a stronger pipeline for girls to pursue a science, technology, engineering, or math (STEM) career and what could be done to change the narrative about engineering to highlight its public-service role. Travis is a professor at the University of San Francisco School of Law, where she co-directs its Work Law and Justice Program. She researches and writes about employment discrimination law, gender stereotypes, and work/family integration. She is also a founding member of the Work and Family Researchers Network and serves on the board of directors of the nonprofit Fathering Together. Her latest book, Dads for Daughters, is a guide for engaging male allies in support of gender equity. (I was one of the fathers featured in the book.) She has written the award-winning My Mom Has Two Jobs, a children's picture book that celebrates working mothers. Over the years, we have stayed in touch, followed each other's work, and looked for other ways to collaborate. In the past few months, I became frustrated by the news of girls from certain countries either not being allowed to go to school or risking their safety even when they were officially allowed to attend. That is one reason I felt I needed to talk to Travis and learn from her about what else could be done to change the way fathers and men in general think about women's abilities and the successes women have had in almost every field including engineering. 
    Last month I asked her a few questions about her book and about what fathers can do to better support women. In the following interview, she gives a sneak peek of her book and lists several resources for engineering dads who want to encourage their daughters to pursue a STEM career.

    QA: Why did you, a lawyer, decide to research and write about fathers and their daughters? Is it personal?

    MT: My interest in engaging dads of daughters as gender equity advocates is both professional and personal. I've spent years as a lawyer and law professor using legal tools to advance women's equality in the workplace—seeking stronger employment-discrimination laws, equal-pay practices, and family-leave policies. Over time, I realized that the law has limits to what it can accomplish. I also realized that we've asked women to do too much of the heavy lifting to break down barriers and crack glass ceilings. Most importantly, I realized that progress requires commitment from male leaders who hold positions of power. I started asking myself how women might engage more men in gender-equity efforts.

    At the same time, I noticed the powerful effect that my two daughters were having on my husband. He had always viewed women's equality as an important goal, but it wasn't until he started thinking about the world his daughters were entering that he fully internalized his personal responsibility and his own power to have an impact. Having daughters fueled his urgency to act. He wanted to become an outspoken advocate for girls and women, rather than just a bystander.

    "Fathers who are engineers are uniquely positioned to become allies for expanding opportunities for girls and women."

    Watching this transformation is what prompted my study of the father-daughter relationship. I discovered that my husband's experience was not unique.
    Researchers have found that having a daughter tends to increase a man's support for antidiscrimination laws, equal-pay policies, and reproductive rights, and it tends to decrease men's support of traditional gender roles. This has significant effects in the workplace. For example, dads of daughters are more likely than other male leaders to champion gender diversity. And CEOs who are dads of daughters tend to have smaller gender wage gaps in their companies than in those run by men who aren't fathers.

    Of course, many men without a daughter are women's allies, and not all dads with daughters are gender-equity advocates. We've even heard some men—including prominent politicians—invoke their "father of a daughter" status in disingenuous ways. But most dads of daughters are genuinely interested in advancing equal opportunities for girls and women. This makes the father-daughter relationship an excellent entry place for inviting men into partnerships to build a more equitable world.

    QA: Why should people read your book?

    MT: Today's dads are raising confident, empowered daughters who believe they can achieve anything. But the world is still unequal, with workplaces run by men, a gender pay gap, and deeply ingrained gender stereotypes. My book celebrates the role that fathers can play in creating a better world for the next generation of girls. Inspired by their daughters, fathers are well positioned to become powerful allies for girls and women. But in a post-#MeToo world, it can be difficult for men to step in and speak up. That's where Dads for Daughters can help. It arms fathers with the data they need to advocate for gender equity. It also offers concrete strategies for how they can make a difference in a variety of areas, from sports fields to science labs, and boardrooms to ballot boxes. In addition to being a guidebook, it also shares stories of fathers who have already joined the fight.
    All the men highlighted credited their daughters for motivating them to focus more on gender equity. They include a CEO who invested in female entrepreneurs to run part of his company's supply chain and a lawyer who created part-time positions at his firm—which keeps women on a partnership track. There is also a head coach who hired the NBA's first female assistant coach. Another is a governor who broke from his party line to sign a bill expanding rights for sexual assault victims. There is an engineer who provided computer skills training to support girls who were victims of India's sex trafficking trade. In addition, there's a teacher, a U.S. Army colonel, a pipe fitter, a firefighter, and a construction contractor, who joined forces to battle for parity in girls' high school sports programs. All those dads, and many others, were inspired to support gender equity because of their daughters. Their stories can motivate other dads to get involved. Dads who are committed to seeing their daughters achieve their dreams have an opportunity to improve the world that their daughters will enter, and Dads for Daughters will support them on this journey.

    QA: What do you think fathers who are engineers can do differently from other dads, and why?

    MT: Fathers who are engineers are uniquely positioned to become allies for expanding opportunities for girls and women. We all know that there's a huge gender imbalance in STEM fields. It results in an enormous loss of talent. Dads of daughters can take small but impactful steps in their homes, communities, and workplaces to welcome more girls and women into engineering careers. At home, fathers can fill their home with books, toys, and activities that empower girls to imagine themselves as future engineers. There are some wonderful resources created by engineering dads for this very purpose.
    For example, finding a lack of engineering role models for his daughter, Greg Helmstetter created the STEAMTeam 5 book series, which shares the adventures of five girls who tackle challenges with their STEM skills. Anthony Onesto was inspired by his daughters to create the Ella the Engineer comic-book series, which features a superhero girl who uses her engineering know-how to solve problems and save the world. Other great children's books include Andrea Beaty's Rosie Revere, Engineer, Tanya Lee Stone's Who Says Women Can't Be Computer Programmers? and Mike Adamick's Dad's Book of Awesome Science Experiments. Dads of daughters can also follow Ken Denmead's GeekDad blog, check out the Go Science Girls website, and buy one of Debbie Sterling's GoldieBlox engineering kits for their daughter's next birthday.

    Dads who are engineers can have an even broader impact in their community by volunteering with a girl tech organization such as EngineerGirl, TechGirlz, Girls Who Code, Girl Develop It, or CoolTechGirls. These organizations are always looking for engineers to share their expertise and passion for STEM careers with talented young girls.

    Engineer dads can also become gender-equity leaders at their workplace. Hiring, mentoring, and sponsoring women is a critical step in expanding women's representation in the engineering field. Dads can further support women by joining programs such as Million Women Mentors or partnering with IEEE Women in Engineering or the Society of Women Engineers. The empathy that dads gain from their daughters can also enable them to create a safer workplace culture by combating hostile work environments and speaking out against gender bias.

    QA: From a grown daughter's perspective, what makes fathers different from husbands or friends?

    MT: In a recent survey, dads rated strength and independence among the top qualities they hoped to instill in their daughters—which is different from the characteristics that men value most in their wives.
From a daughter's perspective, this can make fathers particularly effective allies on their behalf. When dads are engaged in their daughters' lives, the relationship has a singularly profound impact. Involved dads raise women who are more confident, have higher self-esteem, and have better mental health. Girls with supportive dads have stronger cognitive abilities and are more likely to stay in school and achieve greater financial success. Involved dads also help daughters enter healthier adult relationships with other men. For fathers, the daughter relationship is a powerful way to build men's empathy skills and increase men's awareness of sex discrimination and gender inequality. For example, men often gain a better understanding of work/family integration challenges while watching their adult daughters juggle career and motherhood demands. Researchers have found that dads of daughters often have more credibility with other men when supporting gender equity. When people advocate for a position that appears to be at odds with their own self-interest, others often react with surprise, anger, and resentment. These reactions go away if the speaker identifies a vested interest in the outcome. This means that invoking one's status as the father of a daughter can grant men "standing" to advocate for gender equity in ways that get others to listen. Because men tend to pay attention to dads of daughters who talk about the importance of women's rights, that makes fathers particularly strong recruiters of other male allies as well.

  • Surviving the Robocalypse
    by Mark Pesce on 23. Novembra 2021. at 16:00

    Does the value of a job lie in how long it resists automation? Over the course of the pandemic, I saw a growing wave of mealtime deliveries: riders whizzing by silently on electric bicycles, ferrying takeout meals to folks in my urban neighborhood who don't want to venture out of their homes. Under constant pressure to pick up and deliver meals before they go cold, these delivery workers toil for some of the lowest wages on offer.

    In the past, delivery was an entry-level position, a way to get a foot in the door, like working in the mail room. Today, it's a business all on its own, with gigantic public companies such as Uber and Deliveroo providing delivery services for restaurant owners. With that outsourcing, delivery has become a dead-end job. Success means only that you get to work the day shift.

    Just a few years ago, we believed these jobs would be gone—wiped out by Level 4 and Level 5 autonomous driving systems. Yet, as engineers better understand the immense challenges of driving on roads crowded with some very irrational human operators, a task that once seemed straightforward now looks nearly intractable.

    Other tasks long thought to be beyond automation have recently taken great leaps forward, though. At the end of June, for example, GitHub previewed its AI pair programmer, Copilot: a set of virtual eyes that works with developers to keep their code clean and logically correct. Copilot falls short of a complete solution—it wouldn't come up with a sophisticated algorithm on its own—but it shows us how automation can make weak programmers stronger. It won't be long before massive AI language models like Microsoft and Nvidia's Megatron-Turing Natural Language Generation (MT-NLG) make short work of basic business copywriting.
Other writing jobs—digesting materials to extract key details, expressing them in accessible language, then preparing them for publication—are also surrendering to automation. The elements for this transformative leap are already falling into place. While it's unlikely that most programming or copywriting will be done by machines anytime soon, an increasing portion will. Those professions now face real competition from automation. Paradoxically, bicycle-based deliveries look likely to need a human mind behind the handlebars for at least the next several years. In a world where software eats everything in sight, those bits that can't be digested continue to require human attention. That attention requires people's time—for which they can earn a living. What we pay people for performing their jobs will increasingly be measured against the cost of using a machine to perform that task. Some white-collar workers will, no doubt, suffer from these new forms of competition from machines. A century ago, farm labor faced a similar devaluation, as agriculture became mechanized. And while countless manufacturing jobs have succumbed to factory automation over the decades, Tesla production hiccups reveal what happens when you try to push automation too far on the factory floor. As the history of the Luddites so aptly demonstrates, the tension between machines and human labor isn't new—but it's growing again now, this time striking at the heart of knowledge work. To stay one step ahead of the machines, we'll need to find the hard bits and maintain the skills required to keep crunching on them. Creativity, insight, wisdom, and empathy—these aptitudes are wholly human and look to remain that way into the future. If we lean into these qualities, we can resist the competitive rise of the machines.

  • Advance Your Career With Rutgers’ Mini MBA Program for Engineers
    by Johanna Perez on 22. Novembra 2021. at 19:00

    Professionals who specialize in engineering and technology management must understand cross-discipline concepts and contribute to multifunctional teams. Although technical expertise is important, it is not enough for long-term career growth and success. Many engineers and technical professionals lack vital skills. And with the recent transition to working remotely, many organizations aren't doing enough to train them.

    The consequences are telling, according to LinkedIn Learning. LinkedIn's guide, "How Learning Programs Attract and Retain Top Talent," says employees who feel their career goals are being sidelined are 12 times more likely to consider leaving their job. By investing in leadership development programs for employees, organizations have been able to retain their best talent. In a survey of employers by CareerBuilder on the impact of hiring people with advanced degrees, 32 percent saw an increase in retention.

    That is why IEEE partnered with the Rutgers Business School to provide the only mini MBA program designed for teams of engineers and technical professionals.

    "This course was well structured and gave us a taste of the world outside of the engineering realm."

    Recently ranked as one of the top three mini MBA programs by Forbes, the IEEE | Rutgers Online Mini-MBA for Engineers is an entirely virtual program that offers foundational courses traditionally taught in master of business administration programs. Courses cover accounting, business communication, business ethics, entrepreneurship, finance, managerial economics, management, marketing, operations, and strategic management. Completion of the program allows learners to:

    • Understand how organizational decisions are made from both operational and technical points of view.
    • Gain knowledge on how various teams within an organization can better work together to meet goals.
    • Leverage their new business skills in order to align their technical know-how with business strategy.
    FEEDBACK

    Here is what a couple of program graduates are saying:

    "The reason I took this course was to get a better understanding of 'the other side,'" says IEEE Senior Member Sohaib Qamar Sheikh, a technology associate at a large commercial property development and investment company in the United Kingdom. "This course was well structured and gave us a taste of the world outside of the engineering realm. It has helped me get a better understanding of various other dimensions associated with our business products."

    "I decided to take the program for the following reasons: It is cost-effective, rich in content, and flexible to fit my schedule," says IEEE Member Anis Ben Arfi, a systems engineer with Analog Devices. "Undertaking my mini MBA course, I learned various skills and improved my potential to handle business operations with ease. I am more informed about trade secrets and patents, more familiar with the product life cycle, different metrics to assess and measure customer experiences, and agile project management processes. My advice for aspiring applicants is: If you get an opportunity to be on board with this journey, please grab it."

    HOW TO SIGN UP

    Registration is now open for individuals interested in participating in next year's sessions. Two sessions are available. One begins in March; the other in September. The deadline to register for the March session is 4 February, and the deadline to register for the September session is 15 August. Individuals interested in registering can contact an IEEE account specialist.

    The IEEE | Rutgers Online Mini-MBA for Engineers is also offered to organizations interested in getting access for groups of 10 or more. If you are interested in group access and pricing, including the option of a customized capstone designed for your organization's needs, contact an IEEE account specialist.

    COMPLIMENTARY VIRTUAL EVENTS

    Interested in learning more about leadership?
    Here are two free IEEE on-demand virtual events that can help future leaders bridge the gap between business and engineering as they prepare for growth into management roles:

    • Lessons in Leadership: Preparing the Future Leaders of Your Engineering Workforce
    • Building Engineering Leaders in the 21st Century

  • Can Earth's Digital Twins Help Us Navigate the Climate Crisis?
    by Edd Gent on 22. Novembra 2021. at 16:03

    Powerful climate models have helped dispel any uncertainty about the scale of the climate crisis the world faces. But these models are large global simulations that can't tell us much about how climate change will impact our daily lives or how to respond at a local level. That's where a digital twin of the Earth could help. A digital twin is a virtual model of a real-world object, machine, or system that can be used to assess how the real-world counterpart is performing, diagnose or predict faults, or simulate how future changes could alter its behavior. Typically, a digital twin involves both a digital simulation and live sensor data from the real world system to keep the model up to date. So far, digital twins have primarily been used in industrial contexts. For example, a digital twin could monitor an electric grid or manufacturing equipment. But there's been growing interest in applying similar ideas to the field of climate simulation to provide a more interactive, and detailed, way to track and predict changes in the systems, such as the atmosphere and oceans, that drive the Earth's climate. Now chipmaker Nvidia has committed to building the world's most powerful supercomputer dedicated to modeling climate change. Speaking at the company's GPU Technology Conference, CEO Jensen Huang said Earth-2 would be used to create a digital twin of Earth in the Omniverse—a virtual collaboration platform that is Nvidia's attempt at a metaverse. "We may finally have a way to simulate the earth's climate 10, 20, or 30 years from now, predict the regional impact of climate change, and take action to mitigate and adapt before it's too late," said Huang. The announcement was light on details, and a spokesman for Nvidia said the company was currently unable to confirm what the architecture of the computer would look like or who would have access to it. 
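    The digital-twin loop described above (a simulation model kept in sync with live sensor data, then run forward to predict future behavior) can be sketched in a few lines of Python. Everything here is invented for illustration: the "twin" tracks a single machine temperature with a toy Newtonian-cooling model, standing in for the vastly larger coupled models an Earth twin would use.

```python
# Minimal digital-twin sketch: a simulated state kept in sync with
# live sensor readings via a simple blending update. All names and
# constants are illustrative, not from any real twin platform.

class DigitalTwin:
    def __init__(self, initial_temp, cooling_rate=0.1, blend=0.5):
        self.temp = initial_temp        # modeled state (e.g., machine temperature, deg C)
        self.cooling_rate = cooling_rate
        self.blend = blend              # how much to trust the sensor vs. the model

    def step(self):
        """Advance the physics model one time step (Newtonian cooling toward 20 C)."""
        self.temp += -self.cooling_rate * (self.temp - 20.0)

    def ingest_sensor(self, reading):
        """Correct the modeled state with a live measurement."""
        self.temp = (1 - self.blend) * self.temp + self.blend * reading

    def predict(self, steps):
        """Run the model forward without sensor data to forecast future behavior."""
        t = self.temp
        for _ in range(steps):
            t += -self.cooling_rate * (t - 20.0)
        return t

twin = DigitalTwin(initial_temp=80.0)
twin.step()                  # model advances: the machine is cooling
twin.ingest_sensor(75.0)     # a real sensor says it is hotter than modeled
forecast = twin.predict(10)  # forecast the state ten steps ahead
```

    The essential loop is the same at any scale: step the model, blend in measurements, and query the synchronized state for forecasts. An Earth twin would replace the one-line cooling model with coupled atmosphere and ocean simulations and the single sensor with global observation streams.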
    But in his talk Huang emphasized the significant role the company sees for machine learning to boost the resolution and speed of climate models and create a digital twin of the Earth. Today, most climate simulation is driven by complex equations that describe the physics behind key processes. Many of these equations are very computationally expensive to solve and so, even on the most powerful supercomputers, models normally only achieve resolutions of 10 to 100 kilometers. Some important processes, such as the behavior of clouds that reflect the Sun's radiation back to space, operate at scales of just a few meters, though, said Huang.

    He thinks machine learning could help here. Alongside announcing Earth-2, the company also unveiled a new machine learning framework called Modulus, designed to help researchers train neural networks to simulate complex physical systems by learning from observed data or the output of physical models. "The resulting model can emulate physics 1,000 to 100,000 times faster than simulation," said Huang. "With Modulus, scientists will be able to create digital twins to better understand large systems like never before."

    Improving the resolution of climate models is a key ingredient for an effective digital twin of Earth, says Bjorn Stevens, director of the Max Planck Institute for Meteorology. Today's climate models rely on statistical workarounds that work well for assessing the climate at a global scale but make it hard to understand local effects. Higher resolution will be crucial for predicting the regional impacts of climate change so that we can better inform adaptation efforts, he says. But Stevens is skeptical that machine learning is some kind of magic bullet to solve this problem. "There is this fantasy somehow that the machine learning will replace the things that we know how to solve physically, but I think it will always have a disadvantage there."
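    The surrogate-modeling idea behind claims like Huang's (train a cheap function approximator on samples from an expensive physics code, then answer queries with the approximator) can be shown with a deliberately tiny example. This is not Nvidia's Modulus API; it is a pure-Python sketch that fits a one-coefficient quadratic where a real framework would train a neural network, and the "physics" is an invented stand-in.

```python
# Toy surrogate model: learn a fast emulator from samples of an
# "expensive" simulation, then evaluate the emulator instead.
# The physics here (aerodynamic drag ~ v^2) is a stand-in for a
# climate solver; the constants are illustrative only.

def expensive_simulation(v):
    """Stand-in for a slow solver: drag force = 0.5 * rho * Cd * v^2."""
    return 0.5 * 1.2 * 0.47 * v * v

# Sample the slow model at a handful of training points.
xs = [float(v) for v in range(0, 21, 2)]
ys = [expensive_simulation(x) for x in xs]

# Fit y = a * x^2 by least squares (closed form for one coefficient).
a = sum(x * x * y for x, y in zip(xs, ys)) / sum(x ** 4 for x in xs)

def surrogate(v):
    """Fast emulator: a single multiply instead of a full solve."""
    return a * v * v

# The emulator now matches the simulation at inputs it never saw.
error = abs(surrogate(7.0) - expensive_simulation(7.0))
```

    Because the training data here is exactly quadratic, the fit is essentially perfect; real climate processes are nothing like that clean, which is the basis of the skepticism quoted above. The speed argument survives, though: evaluating a trained emulator is far cheaper than re-running the solver it was trained on.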
    The key to creating a digital twin is making a system that is highly interactive, he says, and the beauty of a physical model is that it replicates every facet of the process in an explainable way. That's something that a machine learning model trained to mimic the process may not be able to do.

    That's not to say there is no place for machine learning, he adds. It is likely to prove useful in helping to speed up workflows, compress data, and potentially develop new models in areas where we have lots of data but little understanding of the physics—for instance how water moves through earth on land. But he thinks the rapid advances in supercomputing power mean that running physical models at much higher resolution is more a case of will and resources than capabilities.

    The European Union hopes to fill that gap with a new initiative called Destination Earth, which was formally launched in January. The project is a joint effort by the European Space Agency, the European Organisation for the Exploitation of Meteorological Satellites, and the European Centre for Medium-Range Weather Forecasts (ECMWF). The goal is to create a platform that can bring together a wide variety of models, simulating not only key aspects of the climate, like the atmosphere and the oceans, but also human systems, says Peter Bauer, deputy director of research at ECMWF. "So you're not only monitoring and simulating precipitation and temperature, but also what that means for agriculture, or water availability, or infrastructure," he says.

    The result won't be a single homogeneous simulation of every aspect of Earth, says Bauer, but an interactive platform that allows users to pull in whatever models and data are necessary to answer the questions they're interested in.
The project will be implemented gradually over the coming decade, but the first two digital twins they hope to deliver will include one aimed at anticipating extreme weather events like floods and forest fires, and another aimed at providing longer-term predictions to support climate adaptation and mitigation efforts. While Nvidia's announcement of a new supercomputer dedicated to climate modeling is welcome, Bauer says the challenge today is more about software engineering than developing new hardware. Most of the critical models have been developed in isolation using very different approaches, so getting them to talk to each other and finding ways to interface highly disparate data streams is an outstanding problem. "Part of the challenge is to actually hide the diversity and complexity of these components away from the user and make them work together," Bauer says. Correction 24 Nov. 2021: An update was made to the description of machine learning's utility for digital earths—it could be useful, the story now reads, in understanding how water moves through earth on land (not the mechanics of dirt as the original version of the story stated).

  • The Chip Shortage Hurts Auto Sales a Lot, Consumer Electronics Only a Little
    by Matthew S. Smith on 22 November 2021 at 16:00

    Hot consumer tech is hard to snag this holiday season. Get used to it. New-car shoppers in the United States, China, and everywhere else face slim inventory and dealers unwilling to budge on price. It's all because of the global chip shortage, which has prompted the Biden administration to support legislation that includes US $52 billion in federal subsidies for U.S. semiconductor manufacturing. But the problem extends far beyond new cars. A report by The Information found that 70 percent of wireless retail stores in the United States faced smartphone shortages. Graphics card pricing remains well above the manufacturer's suggested retail level and shows no sign of retreat. Game consoles are drawing hundreds-long lines a full year after launch. Televisions are both more expensive and more difficult to find than last year. You might think this a temporary, COVID-related supply-chain shortfall, but no. The problem is not the number of PlayStation 5 consoles in stock. The problem is the people in line ahead of you. Sony's PlayStation 5 sales data illustrates the nature of the challenge. Global sales of the PlayStation 5 outpace those of the PlayStation 4 at this point in the product's life cycle: The PS5 has sold more quickly than any other console in Sony's history. The same pattern holds for PCs, smartphones, video games, and tablets, which all saw an uptick in year-over-year sales during the first quarter of 2021. That's quite an achievement, given the unprecedented, lockdown-driven highs of 2020. The serious chip shortage really is hobbling the production of automobiles, the largest and most expensive of all our consumer gadgets. But it's a mistake to assume that this shortage limits supplies of lesser gadgets, most of which are in fact pouring into stores and then flying off the shelves. 
You should expect unrelenting prices and very long lead times that only lengthen. If you want truly in-demand gear to unwrap for the holidays, whether it's a game console or the new iPad Mini, it may already be too late to get it (from a retailer, at least—there's always eBay). And you should plan ahead for the next year, as there's no sign that supply will catch up in 2022. This may annoy shoppers, but the disruption among consumer tech companies is even more dire. Record demand is typically a good thing, but the sudden surge has forced a competition for chip production that only the largest companies can win. Rumors hint that Apple has locked in most, if not all, leading-edge chip production from Taiwan Semiconductor Manufacturing Co., the world's largest independent semiconductor foundry. Apple's order is said to include up to 100 million chips for new iPhones, iPads, and MacBooks. Even large companies like Qualcomm are struggling to compete with Apple's size and volume. Big moves from big companies have the trickle-down effect of delaying innovative ideas from smaller players: a crank-powered game console, a customizable LED face mask, and a tiny, 200-watt USB charger are just three out of hundreds of examples. The result could be a subtle, unfortunate squeeze on tiny tech startups, one that can spoil even the most conservative production timeline. Backers are likely to face ever-increasing waits. Some will give up and demand a refund. So, should you learn to live with stock notifications and long lines indefinitely? Maybe not. Investment in production might well catch up with demand by 2023. Industry analysts worry this could lead to a price crash if semiconductor manufacturers overshoot. Perhaps the summer of 2023 will be the time when you can once again buy the latest consumer tech not just minutes but hours after it's released. 
Until then, well, you'll just have to be patient. This article appears in the December 2021 print issue as "When the Chips Are Down."

  • A Smart Artificial Pancreas Could Conquer Diabetes
    by Boris Kovatchev on 21 November 2021 at 16:00

    In some ways, this is a family story. Peter Kovatchev was a naval engineer who raised his son, Boris, as a problem solver, and who built model ships with his granddaughter, Anna. He also suffered from a form of diabetes in which the pancreas cannot make enough insulin. To control the concentration of glucose in his blood, he had to inject insulin several times a day, using a syringe that he kept in a small metal box in the family's refrigerator. But although he tried to administer the right amount of insulin at the right times, his blood-glucose control was quite poor. He passed away from diabetes-related complications in 2002. Boris now conducts research on bioengineered substitutes for the pancreas; Anna is a writer and a designer. A person who requires insulin must walk a tightrope. Blood-glucose concentration can swing dramatically, and it is particularly affected by meals and exercise. If it falls too low, the person may faint; if it rises too high and stays elevated for too long, the person may go into a coma. To avoid repeated episodes of low blood glucose, patients in the past would often run their blood glucose somewhat high, laying themselves open to long-term complications, such as nerve damage, blindness, and heart disease. And patients always had to keep one eye on their blood glucose levels, which they measured many times a day by pricking their fingers for drops of blood. It was easily the most demanding therapy that patients have ever been required to administer to themselves. No longer: The artificial pancreas is finally at hand. This is a machine that senses any change in blood glucose and directs a pump to administer either more or less insulin, a task that may be compared to the way a thermostat coupled to an HVAC system controls the temperature of a house. 
All commercial artificial pancreas systems are still "hybrid," meaning that users are required to estimate the carbohydrates in a meal they're about to consume and thus assist the system with glucose control. Nevertheless, the artificial pancreas is a triumph of biotechnology. It is a triumph of hope, as well. We well remember a morning in late December of 2005, when experts in diabetes technology and bioengineering gathered in the Lister Hill Auditorium at the National Institutes of Health in Bethesda, Md. By that point, existing technology enabled people with diabetes to track their blood glucose levels and use those readings to estimate the amount of insulin they needed. The problem was how to remove human intervention from the equation. A distinguished scientist took the podium and explained that biology's glucose-regulation mechanism was far too complex to be artificially replicated. Boris Kovatchev and his colleagues disagreed, and after 14 years of work they were able to prove the scientist wrong. It was yet another confirmation of Arthur Clarke's First Law: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong." In a healthy endocrine system, the fasting blood glucose level is around 80 to 100 milligrams per deciliter of blood. The entire blood supply of a typical adult contains 4 or 5 grams of sugar—roughly as much as in the paper packet that restaurants offer with coffee. Consuming carbohydrates, either as pure sugar or as a starch such as bread, causes blood glucose levels to rise. A normally functioning pancreas recognizes the incoming sugar rush and secretes insulin to allow the body's cells to absorb it so that it can be used as energy or stored for such use later on. This process brings the glucose level back to normal. 
However, in people with type 1 or insulin-requiring type 2 diabetes—of whom there are nearly 8.5 million in the United States alone—the pancreas produces either no insulin or too little, and the control process must be approximated by artificial means. In the early days, this approximation was very crude. Insulin was first isolated in 1921 and first administered to diabetic patients in Canada in 1922; for decades after, the syringe was the primary tool used to manage diabetes. Because patients in those days had no way to directly measure blood glucose, they had to test their urine, where traces of sugar proved only that blood-glucose levels had already risen to distressingly high levels. Only in 1970 did ambulatory blood-glucose testing become possible; in 1980 it became commercially available. Chemically treated strips reacted with glucose in a drop of blood, changing color in relation to the glucose concentration. Eventually meters equipped with photodiodes and optical sensors were devised to read the strips more precisely. The first improvement was in the measurement of blood glucose; the second was in the administration of insulin. The first insulin pump had to be worn like a backpack and was impractical for daily use, but it paved the way for all other intravenous blood-glucose control designs, which began to emerge in the 1970s. The first commercial "artificial pancreas" was a refrigerator-size machine called the Biostator, intended for use in hospitals. However, its bulk and its method of infusing insulin directly into a vein prevented it from advancing beyond hospital experiments. [Photo caption: The original artificial pancreas, called the Biostator, shown in hospital use in about 1977. It delivered insulin and glucose directly into the veins and could not be adapted to home use. Credit: William Clarke/University of Virginia] That decade also saw work on more advanced insulin-delivery tools: pumps that could continually infuse insulin through a needle placed under the skin. 
The first such commercial pump, Dean Kamen's AutoSyringe, was introduced in the late 1970s, but the patient still had to program it based on periodic blood-glucose measurements done by finger sticks. Patients continued to depend on finger sticks until 1999, when Medtronic introduced the first continuous glucose monitor portable enough for outpatient use. A thin electrode is inserted under the skin with a needle and then connected to the monitor, which is worn against the body. Abbott and Dexcom soon followed with devices presenting glucose data in real time. The accuracy of such meters has consistently improved over the past 20 years, and it is thanks to those advances that an artificial pancreas has become possible. The ultimate goal is to replicate the entire job of the pancreatic control system, so that patients will no longer have to minister to themselves. But mimicking a healthy pancreas has proven exceptionally difficult. Fundamentally, blood-glucose management is a problem in optimization, one that is complicated by meals, exercise, illness, and other external factors that can affect metabolism. In 1979, the basis for solving this problem was introduced by the biomedical engineers Richard Bergman and Claudio Cobelli, who described the human metabolic system as a series of equations. In practice, however, finding a solution is hard for three main reasons. Insulin-action delay: In the body, insulin is secreted in the pancreas and shunted directly into the bloodstream. But when injected under the skin, even the fastest insulins take from 40 minutes to an hour to reach the peak of their action. So the controller of the artificial pancreas must plan on lowering blood glucose an hour from now—it must predict the future. Inconsistency: Insulin action differs between people, and even within the same person at different times. 
Sensor inaccuracy: Even the best continuous glucose monitors make mistakes, sometimes drifting in a certain direction—showing glucose levels that are either too low or too high, a problem that can last for hours. [Illustration caption: The artificial pancreas reproduces the healthy body's glucose-control system, which begins when carbohydrates are digested into glucose and ferried by the blood to the pancreas, which senses the increased glucose concentration and secretes just enough insulin to enable the body's cells to absorb the glucose. Two control systems based in the pancreas cooperate to keep blood-glucose concentrations within healthy bounds. One uses insulin to lower high levels of glucose; the other uses another hormone, called glucagon, to raise low levels. Today's artificial pancreas relies on insulin alone, but two-hormone systems are being studied. Credit: Chris Philpot] What's more, the system must take into account complex external influences so that it works just as well for a middle-aged man sitting at a desk all day as for a teenager on a snowboard, rocketing down a mountainside. To overcome these problems, researchers have proposed various solutions. The first attempt was a straightforward proportional-integral-derivative (PID) controller, in which insulin is delivered in proportion to the increase of blood-glucose levels and their rate of change. This method is still used by one commercial system, from Medtronic, after many improvements of the algorithm that adjusts the reaction of the PID to the pace of subcutaneous insulin transport. A more sophisticated approach is the predictive control algorithm, which uses a model of the human metabolic system, such as the one proposed in 1979 by Bergman and Cobelli. The point is to predict future states and thereby partially compensate for the delayed diffusion of subcutaneous insulin into the bloodstream. Yet another experimental controller uses two hormones—insulin, to lower blood-glucose levels, and glucagon, to raise them. 
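The PID strategy just described fits in a few lines of code. This is an illustrative textbook controller, not Medtronic's algorithm: the gains, the 110 mg/dL setpoint, and the 5-minute interval are arbitrary choices for the sketch, and it is in no way a medical dosing algorithm.

```python
# Textbook proportional-integral-derivative (PID) controller of the kind
# described above. Illustrative only: gains, setpoint, and interval are
# arbitrary assumptions, not a medical algorithm.
class PID:
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint        # target glucose, mg/dL
        self.dt = dt                    # minutes between CGM readings
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, glucose):
        error = glucose - self.setpoint                   # positive when glucose runs high
        self.integral += error * self.dt                  # accumulated excess
        derivative = (error - self.prev_error) / self.dt  # rate of change
        self.prev_error = error
        rate = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, rate)           # a pump cannot deliver a negative dose

pid = PID(kp=0.02, ki=0.0005, kd=0.1, setpoint=110, dt=5)
print(pid.update(180))   # elevated glucose -> positive insulin rate
print(pid.update(120))   # falling fast -> controller backs off to zero
```

The derivative term is what lets the controller ease off while glucose is still above target but dropping quickly, partially compensating for the insulin-action delay described above.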
In each of these approaches, modeling work went far to create the conceptual background for building an artificial pancreas. The next step was to actually build it. To design a controller, you must have a way of testing it, for which biomedical engineering has typically relied on animal trials. But such testing is time consuming and costly. In 2007, our group at the University of Virginia proposed using computer-simulation experiments instead. Together with our colleagues at the University of Padua, in Italy, we created a computer model of glucose-insulin dynamics that operated on 300 virtual subjects with type 1 diabetes. Our model described the interaction over time of glucose and insulin by means of differential equations representing the best available estimates of human physiology. The parameters of the equations differed from subject to subject. The complete array of all physiologically feasible parameter sets described the simulated population. In January 2008, the U.S. Food and Drug Administration (FDA) made the unprecedented decision to accept our simulator as a substitute for animal trials in the preclinical testing of artificial pancreas controllers. The agency agreed that such in silico simulations were sufficient for regulatory approval of inpatient human trials. Suddenly, rapid and cost-effective algorithm development was a possibility. Only three months later, in April of 2008, we began using the controller we'd designed and tested in silico in real people with type 1 diabetes. The UVA/Padua simulator is now in use by engineers worldwide, and animal experiments for testing of new artificial pancreas algorithms have been abandoned. Meanwhile, funding was expanding for research on other aspects of the artificial pancreas. 
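The Bergman minimal model gives a flavor of the differential equations such simulators are built on. The sketch below integrates a common two-equation form of the model with simple Euler steps; the parameter values are rough illustrative choices, not fitted physiology, and a real simulator tracks far more state than this.

```python
# Euler integration of the Bergman "minimal model" (1979), the style of
# differential-equation description the UVA/Padua simulator builds on.
# Parameter values are rough illustrative choices, not fitted physiology.
def simulate(G0=250.0, Gb=90.0, Ib=7.0, insulin=20.0,
             p1=0.03, p2=0.02, p3=1.3e-5, dt=1.0, minutes=300):
    G = G0      # plasma glucose, mg/dL
    X = 0.0     # "remote" insulin action
    for _ in range(int(minutes / dt)):
        dG = -(p1 + X) * G + p1 * Gb          # uptake pulls glucose toward basal
        dX = -p2 * X + p3 * (insulin - Ib)    # insulin above basal raises action X
        G += dG * dt
        X += dX * dt
    return G

print(simulate())             # steady insulin above basal drives glucose down
print(simulate(insulin=7.0))  # at basal insulin, glucose settles near Gb = 90
```

Varying the parameters p1 through p3 from run to run is, in miniature, how a population of "virtual subjects" can be generated.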
In 2006 the JDRF (formerly the Juvenile Diabetes Research Foundation) started work on a device at several centers in the U.S. and across Europe; in 2008 the U.S. National Institutes of Health launched a research initiative; and from 2010 to 2014, the European Union–funded AP@Home consortium was active. The global frenzy of rapid prototyping and testing bore fruit: The first outpatient studies took place from September 2011 through January 2012 at camps for diabetic children in Israel, Germany, and Slovenia, where children with type 1 diabetes were monitored overnight using a laptop-based artificial pancreas system. Most of these early studies rated the artificial pancreas systems as better than manual insulin therapy in three ways. The patients spent more time within the target range for blood glucose, they had fewer instances of low blood glucose, and they had better control during sleep—a time when low blood glucose levels can be hard to detect and to manage. But these early trials all relied on laptop computers to run the algorithms. The next challenge was to make the systems mobile and wireless, so that they could be put to the test under real-life conditions. Our team at UVA developed the first mobile system, the Diabetes Assistant, in 2011. It ran on an Android smartphone, had a graphical interface, and was capable of Web-based remote observation. First, we tested it on an outpatient basis in studies that lasted from a few days to 6 months. Next, we tried it on patients who were at high risk because they had suffered from frequent or severe bouts of low blood glucose. Finally we stress-tested the system in children with type 1 diabetes who were learning to ski at a 5-day camp. In 2016, a pivotal trial ended for the first commercial hybrid system—the MiniMed 670G—which automatically controlled the continuous rate of insulin throughout the day but not the additional doses of insulin that were administered before a meal. 
The system was cleared by the FDA for clinical use in 2017. Other groups around the world were also testing such systems, with overwhelmingly good results. One 2018 meta-analysis of 40 studies, totaling 1,027 participants, found that patients stayed within their blood-glucose target range (70–180 mg/dL) about 15 percent more of the time while asleep and nearly 10 percent more overall, as compared to patients receiving standard treatment. Our original machine's third-generation descendant—based on Control-IQ technology and made by Tandem Diabetes Care in San Diego—underwent a six-month randomized trial in teenagers and adults with type 1 diabetes, ages 14 and up. We published the results in the New England Journal of Medicine in October 2019. The system uses a Dexcom G6 continuous glucose monitor—one that no longer requires calibration by finger-stick samples—an insulin pump from Tandem, and the control algorithm originally developed at UVA. The algorithm is built right into the pump, which means the system does not require an external smartphone to handle the computing. Control-IQ still requires some involvement from the user. Its hybrid control system asks the person to push a button saying "I am eating" and then enter the estimated amount of carbohydrates; the person can also push a button saying "I am exercising." These interventions aren't absolutely necessary, but they make the control better. Thus, we can say that today's controllers can be used for full control, but they work better as hybrids. The system has a dedicated safety module that either stops or slowly attenuates the flow of insulin whenever the system predicts low blood glucose. Also, it gradually increases insulin dosing overnight, avoiding the tendency toward morning highs and aiming for normalized glucose levels by 7 a.m. 
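Time in range, the headline metric in these trials, is simple to compute from continuous-glucose-monitor data: it is just the share of readings inside the 70–180 mg/dL band. A minimal sketch, using made-up readings:

```python
# Time in range (TIR): the share of continuous-glucose-monitor readings
# inside the 70-180 mg/dL target band used in the trials above.
# The readings below are made-up values for illustration.
def time_in_range(readings, low=70, high=180):
    in_range = sum(1 for g in readings if low <= g <= high)
    return 100.0 * in_range / len(readings)

readings = [65, 92, 110, 150, 185, 240, 130, 99, 172, 60]
print(f"TIR: {time_in_range(readings):.0f}%")  # 6 of 10 readings in range
```

A 10-percentage-point TIR improvement, as reported in the meta-analysis, corresponds to roughly 2.4 more hours per day spent in the healthy band.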
The six-month trial tested Control-IQ against the standard treatment, in which the patient does all the work, using information from a glucose monitor to operate an insulin pump. Participants using Control-IQ spent 11 percent more time in the target blood-glucose range and cut in half—from 2.7 percent to 1.4 percent—the time spent below the low-glucose redline, which is 70 mg/dL. In December 2019, the FDA authorized the clinical use of Control-IQ for patients 14 and up, and our system thus became the first "interoperable automated insulin-dosing controller," one that can connect to various insulin pumps and continuous glucose monitors. Patients can now customize their artificial pancreases.

Selected Artificial Pancreas Projects From Around the World
  • Two that have been approved by the FDA: the Medtronic MiniMed 670G and Control-IQ from Tandem Diabetes Care
  • Beta Bionics
  • Bigfoot Biomedical
  • Diabeloop
  • DreaMed Diabetes
  • EOPancreas
  • Inreda Diabetic
  • Eli Lilly and Ypsomed
  • Pancreum
  • The many, many DIY projects underway
  • Research on the potential of a fully implantable artificial pancreas

The FDA approval came almost 14 years to the day after the expert in that Maryland conference room stated that the problem was unsolvable. A month after the approval, Control-IQ was released to users of Tandem's insulin pump as an online software upgrade. And in June 2020, following another successful clinical trial in children with type 1 diabetes between 6 and 13 years old, the FDA approved Control-IQ for ages 6 and up. Children can benefit from this technology more than any other age group because they are the least able to manage their own insulin dosages. In April 2021, we published an analysis of 9,400 people using Control-IQ for one year, and this real-life data confirmed the results of the earlier trials. As of 1 September 2021, Control-IQ is used by over 270,000 people with diabetes in 21 countries. 
To date, these people have logged over 30 million days on this system. One parent wrote Tandem about how eight weeks on the Control-IQ had drastically reduced his son's average blood-glucose concentration. "I have waited and toiled 10 years for this moment to arrive," he wrote. "Thank you." Progress toward better automatic control will be gradual; we anticipate a smooth transition from hybrid to full autonomy, when the patient never intervenes. Work is underway on using faster-acting insulins that are now in clinical trials. Perhaps one day it will make sense to implant the artificial pancreas within the abdominal cavity, where the insulin can be fed directly into the bloodstream, for still faster action. What comes next? Well, what else seems impossible today? This article appears in the December 2021 print issue as "Creating the Artificial Pancreas."

  • The Femtojoule Promise of Analog AI
    by Geoffrey W. Burr on 20 November 2021 at 16:00

    Machine learning and artificial intelligence (AI) have already penetrated so deeply into our life and work that you might have forgotten what interactions with machines used to be like. We used to ask only for precise quantitative answers to questions conveyed with numeric keypads, spreadsheets, or programming languages: "What is the square root of 10?" "At this rate of interest, what will be my gain over the next five years?" But in the past 10 years, we've become accustomed to machines that can answer the kind of qualitative, fuzzy questions we'd only ever asked of other people: "Will I like this movie?" "How does traffic look today?" "Was that transaction fraudulent?" Deep neural networks (DNNs), systems that learn how to respond to new queries when they're trained with the right answers to very similar queries, have enabled these new capabilities. DNNs are the primary driver behind the rapidly growing global market for AI hardware, software, and services, valued at US $327.5 billion this year and expected to pass $500 billion in 2024, according to the International Data Corporation. Convolutional neural networks first fueled this revolution by providing superhuman image-recognition capabilities. In the last decade, new DNN models for natural-language processing, speech recognition, reinforcement learning, and recommendation systems have enabled many other commercial applications. But it's not just the number of applications that's growing. The size of the networks and the data they need are growing, too. DNNs are inherently scalable—they provide more reliable answers as they get bigger and as you train them with more data. But doing so comes at a cost. 
The number of computing operations needed to train the best DNN models grew 1 billionfold between 2010 and 2018, meaning a huge increase in energy consumption. And while each use of an already-trained DNN model on new data—termed inference—requires much less computing, and therefore less energy, than the training itself, the sheer volume of such inference calculations is enormous and increasing. If it's to continue to change people's lives, AI is going to have to get more efficient. We think changing from digital to analog computation might be what's needed. Using nonvolatile memory devices and two fundamental physical laws of electrical engineering, simple circuits can implement a version of deep learning's most basic calculations that requires mere thousandths of a trillionth of a joule (a femtojoule). There's a great deal of engineering to do before this tech can take on complex AIs, but we've already made great strides and mapped out a path forward. [Sidebar: AI’s Fundamental Function. The most basic computation in an artificial neural network is called multiply and accumulate. The outputs of artificial neurons [left, yellow] are multiplied by the weight values connecting them to the next neuron [center, light blue]. That neuron sums its inputs and applies an output function. In analog AI, the multiply function is performed by Ohm's Law, where the neuron's output voltage is multiplied by the conductance representing the weight value. The summation at the neuron is done by Kirchhoff's Current Law, which simply adds all the currents entering a single node.] The biggest time and energy costs in most computers occur when lots of data has to move between external memory and computational resources such as CPUs and GPUs. This is the "von Neumann bottleneck," named after the classic computer architecture that separates memory and logic. One way to greatly reduce the power needed for deep learning is to avoid moving the data—to do the computation out where the data is stored. 
DNNs are composed of layers of artificial neurons. Each layer of neurons drives the output of those in the next layer according to a pair of values—the neuron's "activation" and the synaptic "weight" of the connection to the next neuron. Most DNN computation is made up of what are called vector-matrix-multiply (VMM) operations—in which a vector (a one-dimensional array of numbers) is multiplied by a two-dimensional array. At the circuit level these are composed of many multiply-accumulate (MAC) operations. For each downstream neuron, all the upstream activations must be multiplied by the corresponding weights, and these contributions are then summed. Most useful neural networks are too large to be stored within a processor's internal memory, so weights must be brought in from external memory as each layer of the network is computed, each time subjecting the calculations to the dreaded von Neumann bottleneck. This leads digital compute hardware to favor DNNs that move fewer weights in from memory and then aggressively reuse these weights. A radical new approach to energy-efficient DNN hardware occurred to us at IBM Research back in 2014. Together with other investigators, we had been working on crossbar arrays of nonvolatile memory (NVM) devices. Crossbar arrays are constructs where devices, memory cells for example, are built in the vertical space between two perpendicular sets of horizontal conductors, the so-called bitlines and the wordlines. We realized that, with a few slight adaptations, our memory systems would be ideal for DNN computations, particularly those for which existing weight-reuse tricks work poorly. We refer to this opportunity as "analog AI," although other researchers doing similar work also use terms like "processing-in-memory" or "compute-in-memory." There are several varieties of NVM, and each stores data differently. But data is retrieved from all of them by measuring the device's resistance (or, equivalently, its inverse—conductance). 
Magnetoresistive RAM (MRAM) uses electron spins, and flash memory uses trapped charge. Resistive RAM (RRAM) devices store data by creating and later disrupting conductive filamentary defects within a tiny metal-insulator-metal device. Phase-change memory (PCM) uses heat to induce rapid and reversible transitions between a high-conductivity crystalline phase and a low-conductivity amorphous phase. Flash, RRAM, and PCM offer the low- and high-resistance states needed for conventional digital data storage, plus the intermediate resistances needed for analog AI. But only RRAM and PCM can be readily placed in a crossbar array built in the wiring above silicon transistors in high-performance logic, to minimize the distance between memory and logic. We organize these NVM memory cells in a two-dimensional array, or "tile." Included on the tile are transistors or other devices that control the reading and writing of the NVM devices. For memory applications, a read voltage addressed to one row (the wordline) creates currents proportional to the NVM's resistance that can be detected on the columns (the bitlines) at the edge of the array, retrieving the stored data. To make such a tile part of a DNN, each row is driven with a voltage for a duration that encodes the activation value of one upstream neuron. Each NVM device along the row encodes one synaptic weight with its conductance. The resulting read current is effectively performing, through Ohm's Law (in this case expressed as "current equals voltage times conductance"), the multiplication of excitation and weight. The individual currents on each bitline then add together according to Kirchhoff's Current Law. The charge generated by those currents is integrated over time on a capacitor, producing the result of the MAC operation. These same analog in-memory summation techniques can also be performed using flash and even SRAM cells, which can be made to store multiple bits but not analog conductances. 
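The in-memory MAC just described can be mimicked with a few lines of linear algebra. In this idealized sketch, Ohm's Law supplies the multiplies, Kirchhoff's Current Law supplies the sums along each bitline, and each signed weight is represented as the difference of two non-negative conductances, one per current path; device noise and ADC effects are ignored.

```python
import numpy as np

# Idealized tile: Ohm's Law does the multiplies (current = voltage x
# conductance) and Kirchhoff's Current Law does the sums (currents on a
# shared bitline simply add). Signed weights use a pair of non-negative
# conductances, one adding charge and one subtracting. Units are arbitrary.
rng = np.random.default_rng(0)
weights = rng.uniform(-1, 1, size=(4, 3))     # 4 inputs x 3 outputs

g_plus = np.maximum(weights, 0.0)             # conductances on the "add" path
g_minus = np.maximum(-weights, 0.0)           # conductances on the "subtract" path

activations = np.array([0.2, 0.9, 0.5, 0.1])  # upstream outputs, applied as row voltages

i_plus = activations @ g_plus                 # per-bitline current, charging the capacitor
i_minus = activations @ g_minus               # per-bitline current, discharging it
analog_mac = i_plus - i_minus                 # one MAC result per column

assert np.allclose(analog_mac, activations @ weights)  # matches the digital VMM
print(analog_mac)
```

The point of the physical version is that all of these multiplies and sums happen simultaneously in the array itself, with no weight ever moving to a processor.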
But we can't use Ohm's Law for the multiplication step. Instead, we use a technique that can accommodate the one- or two-bit dynamic range of these memory devices. However, this technique is highly sensitive to noise, so we at IBM have stuck to analog AI based on PCM and RRAM. Unlike conductances, DNN weights and activations can be either positive or negative. To implement signed weights, we use a pair of current paths—one adding charge to the capacitor, the other subtracting. To implement signed excitations, we allow each row of devices to swap which of these paths it connects with, as needed. [Sidebar: Nonvolatile Memories for Analog AI. Phase-change memory's conductance is set by the transition between a crystalline and an amorphous state in a chalcogenide glass. In resistive RAM, conductance depends on the creation and destruction of conductive filaments in an insulator. Flash memory stores data as charge trapped in a "floating gate." The presence or absence of that charge modifies conductances across the device. Electrochemical RAM acts like a miniature battery. Pulses of voltage on a gate electrode modulate the conductance between the other two terminals by the exchange of ions through a solid electrolyte.] With each column performing one MAC operation, the tile does an entire vector-matrix multiplication in parallel. For a tile with 1,024 × 1,024 weights, this is 1 million MACs at once. In systems we've designed, we expect that all these calculations can take as little as 32 nanoseconds. Because each MAC performs a computation equivalent to that of two digital operations (one multiply followed by one add), performing these 1 million analog MACs every 32 nanoseconds represents 65 trillion operations per second. We've built tiles that manage this feat using just 36 femtojoules of energy per operation, the equivalent of 28 trillion operations per joule. 
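The throughput and efficiency figures quoted above follow directly from the tile dimensions:

```python
# Arithmetic behind the figures quoted above for one 1,024 x 1,024 tile.
macs = 1024 * 1024            # one analog MAC per weight, all in parallel
ops = 2 * macs                # each MAC counts as a multiply plus an add
time_s = 32e-9                # one tile-wide VMM every 32 nanoseconds

tops = ops / time_s / 1e12
print(f"{tops:.1f} trillion operations per second")   # ~65 trillion

energy_per_op = 36e-15        # 36 femtojoules per operation
ops_per_joule = 1.0 / energy_per_op
print(f"{ops_per_joule / 1e12:.0f} trillion operations per joule")  # ~28 trillion
```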
Our latest tile designs reduce this figure to less than 10 fJ, making them 100 times as efficient as commercially available hardware and 10 times better than the system-level energy efficiency of the latest custom digital accelerators, even those that aggressively sacrifice precision for energy efficiency. It's been important for us to make this per-tile energy efficiency high, because a full system consumes energy on other tasks as well, such as moving activation values and supporting digital circuitry. There are significant challenges to overcome for this analog-AI approach to really take off. First, deep neural networks, by definition, have multiple layers. To cascade multiple layers, we must process the VMM tile's output through an artificial neuron's activation—a nonlinear function—and convey it to the next tile. The nonlinearity could potentially be performed with analog circuits and the results communicated in the duration form needed for the next layer, but most networks require other operations beyond a simple cascade of VMMs. That means we need efficient analog-to-digital conversion (ADC) and modest amounts of parallel digital compute between the tiles. Novel, high-efficiency ADCs can help keep these circuits from affecting the overall efficiency too much. Recently, we unveiled a high-performance PCM-based tile using a new kind of ADC that helped the tile achieve better than 10 trillion operations per second per watt. A second challenge, which has to do with the behavior of NVM devices, is more troublesome. Digital DNNs have proven accurate even when their weights are described with fairly low-precision numbers. The 32-bit floating-point numbers that CPUs often calculate with are overkill for DNNs, which usually work just fine and with less energy when using 8-bit floating-point values or even 4-bit integers. This provides hope for analog computation, so long as we can maintain a similar precision.
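To see how little precision a dot product needs, here is a minimal sketch of symmetric 4-bit integer quantization; the scheme and values are illustrative and not tied to any particular accelerator.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(0.0, 0.5, size=1024)            # "full-precision" weights

n_bits = 4
q_max = 2 ** (n_bits - 1) - 1                  # 7 for a signed 4-bit code
scale = np.abs(w).max() / q_max                # map the largest weight to +/-7
w_q = np.round(w / scale).astype(np.int8)      # only 16 representable levels
w_hat = w_q * scale                            # dequantized approximation

# Rounding to the nearest level costs at most half a step per weight.
assert np.max(np.abs(w_hat - w)) <= scale / 2 + 1e-12
```

Sixteen levels cap the per-weight rounding error at half a quantization step, which is the kind of error budget an analog tile must also stay within.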
Given the importance of conductance precision, writing conductance values to NVM devices to represent weights in an analog neural network needs to be done slowly and carefully. Compared with traditional memories, such as SRAM and DRAM, PCM and RRAM are already slower to program and wear out after fewer programming cycles. Fortunately, for inference, weights don't need to be frequently reprogrammed. So analog AI can use time-consuming write-verification techniques to boost the precision of programming RRAM and PCM devices without any concern about wearing the devices out. That boost is much needed because nonvolatile memories have an inherent level of programming noise. RRAM's conductivity depends on the movement of just a few atoms to form filaments. PCM's conductivity depends on the random formation of grains in the polycrystalline material. In both, this randomness poses challenges for writing, verifying, and reading values. Further, in most NVMs, conductances change with temperature and with time, as the amorphous phase structure in a PCM device drifts, or the filament in an RRAM relaxes, or the trapped charge in a flash memory cell leaks away. There are some ways to finesse this problem. Significant improvements in weight programming can be obtained by using two conductance pairs. Here, one pair holds most of the signal, while the other pair is used to correct for programming errors on the main pair. Noise is reduced because it gets averaged out across more devices. We tested this approach recently in a multitile PCM-based chip, using both one and two conductance pairs per weight. With it, we demonstrated excellent accuracy on several DNNs, even on a recurrent neural network, a type that's typically sensitive to weight programming errors. Vector-Matrix Multiplication with Analog AI Vector-matrix multiplication (VMM) is the core of a neural network's computing [top]; it is a collection of multiply-and-accumulate processes. 
Here the activations of artificial neurons [yellow] are multiplied by the weights of their connections [light blue] to the next layer of neurons [green]. For analog AI, VMM is performed on a crossbar array tile [center]. At each cross point, a nonvolatile memory cell encodes the weight as conductance. The neurons' activations are encoded as the duration of a voltage pulse. Ohm's Law dictates that the current along each crossbar column is equal to this voltage times the conductance. Capacitors [not shown] at the bottom of the tile sum up these currents. A neural network's multiple layers are represented by converting the output of one tile into the voltage duration pulses needed as the input to the next tile [right]. Different techniques can help ameliorate noise in reading and drift effects. But because drift is predictable, perhaps the simplest is to amplify the signal during a read with a time-dependent gain that can offset much of the error. Another approach is to use the same techniques that have been developed to train DNNs for low-precision digital inference. These adjust the neural-network model to match the noise limitations of the underlying hardware. As we mentioned, networks are becoming larger. In a digital system, if the network doesn't fit on your accelerator, you bring in the weights for each layer of the DNN from external memory chips. But NVM's writing limitations make that a poor decision. Instead, multiple analog AI chips should be ganged together, with each passing the intermediate results of a partial network from one chip to the next. This scheme incurs some additional communication latency and energy, but it's far less of a penalty than moving the weights themselves. Until now, we've only been talking about inference—where an already-trained neural network acts on novel data. But there are also opportunities for analog AI to help train DNNs. DNNs are trained using the backpropagation algorithm. 
This combines the usual forward inference operation with two other important steps—error backpropagation and weight update. Error backpropagation is like running inference in reverse, moving from the last layer of the network back to the first layer; weight update then combines information from the original forward inference run with these backpropagated errors to adjust the network weights in a way that makes the model more accurate. The Tiki-Taka Solution Analog AI can reduce the power consumption of training neural networks, but because of some inherent characteristics of the nonvolatile memories involved, there are complications. Nonvolatile memories, such as phase-change memory and resistive RAM, are inherently noisy. What's more, their behavior is asymmetric. That is, at most points on their conductance curve, the same value of voltage will produce a different change in conductance depending on the voltage's polarity. One solution we came up with, the Tiki-Taka algorithm, is a modification to backpropagation training. Crucially, it is significantly more robust to noise and asymmetric behavior in the NVM conductance. This algorithm depends on RRAM devices constructed to conduct in both directions. Each of these is initialized to its symmetry point—the spot on its conductance curve where the conductance increase and decrease for a given voltage are exactly balanced. In Tiki-Taka, the symmetry-point-balanced NVM devices are involved in weight updates to train the network. Periodically, their conductance values are programmed onto a second set of devices, and the training devices are returned to their natural symmetry point. This allows the neural network to train to high accuracy, even in the presence of noise and asymmetry that would completely disrupt the conventional backpropagation algorithm.
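The transfer-and-reset idea can be sketched with a toy model. This is a loose caricature of Tiki-Taka, not IBM's published algorithm: the device response function, learning rate, and transfer period are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def device_update(A, step):
    # Caricature of an NVM device: the response to an update pulse is both
    # asymmetric (down-steps land weaker than up-steps) and noisy.
    gain = np.where(step > 0, 1.0, 0.8)
    noise = rng.uniform(0.9, 1.1, size=A.shape)
    return A + gain * noise * step

target = np.array([[1.0, -0.5], [0.25, 0.75]])   # weights we want to learn
W = np.zeros_like(target)                         # slowly updated weight array
A = np.zeros_like(target)                         # fast, noisy training array

for step_idx in range(600):
    grad = (W + A) - target                       # gradient of a toy quadratic loss
    A = device_update(A, -0.1 * grad)             # updates land on the fast array
    if (step_idx + 1) % 20 == 0:                  # periodically transfer to W ...
        W += A
        A[:] = 0.0                                # ... and reset A to its symmetry point
```

Despite the noisy, asymmetric updates, the effective weights W + A converge on the target, because the periodic transfer keeps the fast array operating near its symmetry point.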
The backpropagation step can be done in place on the tiles but in the opposite manner of inferencing—applying voltages to the columns and integrating current along rows. Weight update is then performed by driving the rows with the original activation data from the forward inference, while driving the columns with the error signals produced during backpropagation. Training involves numerous small weight increases and decreases that must cancel out properly. That's difficult for two reasons. First, recall that NVM devices wear out with too much programming. Second, the same voltage pulse applied with opposite polarity to an NVM may not change the cell's conductance by the same amount; its response is asymmetric. But symmetric behavior is critical for backpropagation to produce accurate networks. This is only made more challenging because the magnitude of the conductance changes needed for training approaches the level of inherent randomness of the materials in the NVMs. There are several approaches that can help here. For example, there are various ways to aggregate weight updates across multiple training examples, and then transfer these updates onto NVM devices periodically during training. A novel algorithm we developed at IBM, called Tiki-Taka, uses such techniques to train DNNs successfully even with highly asymmetric RRAM devices. Finally, we are developing a device called electrochemical random-access memory (ECRAM) that can offer not just symmetric but highly linear and gradual conductance updates. The success of analog AI will depend on achieving high density, high throughput, low latency, and high energy efficiency—simultaneously. Density depends on how tightly the NVMs can be integrated into the wiring above a chip's transistors. Energy efficiency at the level of the tiles will be limited by the circuitry used for analog-to-digital conversion. 
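A toy energy budget shows why per-tile efficiency alone does not determine system efficiency; every number here is invented for illustration.

```python
# Hypothetical fixed non-tile energy per operation (communication, inter-tile
# digital compute, ADCs), in femtojoules. The exact value is made up.
OVERHEAD_FJ = 20.0

def sustained_tera_ops_per_joule(tile_fj_per_op):
    # Total energy per operation is tile energy plus the fixed overhead;
    # convert fJ per op into tera-operations per joule.
    total_fj = tile_fj_per_op + OVERHEAD_FJ
    return 1e15 / total_fj / 1e12

eff_36 = sustained_tera_ops_per_joule(36.0)   # ~17.9 tera-ops/J
eff_10 = sustained_tera_ops_per_joule(10.0)   # ~33.3 tera-ops/J
eff_0 = sustained_tera_ops_per_joule(0.0)     # 50.0 tera-ops/J: the ceiling
```

With a fixed 20-fJ overhead, shrinking the tile's share from 36 fJ toward zero can never push the system past 50 tera-operations per joule; that overhead-set ceiling is exactly the Amdahl-style limit on sustained efficiency.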
But even as these factors improve and as more and more tiles are linked together, Amdahl's Law—an argument about the limits of parallel computing—will pose new challenges to optimizing system energy efficiency. Previously unimportant aspects such as data communication and the residual digital computing needed between tiles will incur more and more of the energy budget, leading to a gap between the peak energy efficiency of the tile itself and the sustained energy efficiency of the overall analog-AI system. Of course, that's a problem that eventually arises for every AI accelerator, analog or digital. The path forward is necessarily different from that of digital AI accelerators. Digital approaches can bring precision down until accuracy falters. But analog AI must first increase the signal-to-noise ratio (SNR) of the internal analog modules until it is high enough to demonstrate accuracy equivalent to that of digital systems. Any subsequent SNR improvements can then be applied toward increasing density and energy efficiency. These are exciting problems, and it will take the coordinated efforts of materials scientists, device experts, circuit designers, system architects, and DNN experts working together to solve them. There is a strong and continued need for more energy-efficient AI acceleration, and a shortage of other attractive alternatives for delivering on this need. Given the wide variety of potential memory devices and implementation paths, it is quite likely that some degree of analog computation will find its way into future AI accelerators. This article appears in the December 2021 print issue as "Ohm's Law + Kirchhoff's Current Law = Better AI."

  • Supercomputers Flex Their AI Muscles
    by Samuel K. Moore on 20. Novembra 2021. at 15:00

Scientific supercomputing is not immune to the wave of machine learning that's swept the tech world. Those using supercomputers to uncover the structure of the universe, discover new molecules, and predict the global climate are increasingly using neural networks to do so. And as is long-standing tradition in the field of high-performance computing, it's all going to be measured down to the last floating-point operation. Twice a year, the TOP500 project publishes a ranking of raw computing power using a value called Rmax, derived from benchmark software called Linpack. By that measure, it's been a bit of a dull year. The ranking of the top nine systems is unchanged from June, with Japan's Supercomputer Fugaku on top at 442,010 trillion floating-point operations per second. That leaves the Fujitsu-built system a bit shy of the long-sought goal of exascale computing—one-million trillion 64-bit floating-point operations per second, or an exaflop. But by another measure—one more related to AI—Fugaku and its competitor the Summit supercomputer at Oak Ridge National Laboratory have already passed the exascale mark. That benchmark, called HPL-AI, measures a system's performance using the lower-precision numbers—16 bits or fewer—common to neural network computing. Using that yardstick, Fugaku hits 2 exaflops (no change from June 2021) and Summit reaches 1.4 (a 23 percent increase). But HPL-AI isn't really how AI is done in supercomputers today. Enter MLCommons, the industry organization that's been setting realistic tests for AI systems of all sizes. It released results from version 1.0 of its high-performance computing benchmarks, called MLPerf HPC, this week. The suite of benchmarks measures the time it takes to train real scientific machine learning models to agreed-on quality targets.
Compared to MLPerf HPC version 0.7, basically a warmup round from last year, the best results in version 1.0 showed a 4- to 7-fold improvement. Eight supercomputing centers took part, producing 30 benchmark results. As in MLPerf's other benchmarking efforts, there were two divisions: "Closed" submissions all used the same neural network model to ensure a more apples-to-apples comparison; "open" submissions were allowed to modify their models. The three neural networks trialed were: CosmoFlow uses the distribution of matter in telescope images to predict things about dark energy and other mysteries of the universe. DeepCAM tests the detection of cyclones and other extreme weather in climate data. OpenCatalyst, the newest benchmark, predicts the quantum mechanical properties of catalyst systems to discover and evaluate new catalyst materials for energy storage. In the closed division, there were two ways of testing these networks: Strong scaling allowed participants to use as much of the supercomputer's resources as they wanted to achieve the fastest neural network training time. Because it's not really practical to use an entire supercomputer's worth of CPUs, accelerator chips, and bandwidth resources on a single neural network, strong scaling shows what researchers think the optimal distribution of resources can do. Weak scaling, in contrast, breaks up the entire supercomputer into hundreds of identical versions of the same neural network to figure out what the system's AI abilities are in total. Here's a selection of results: Argonne National Laboratory used its Theta supercomputer to measure strong scaling for DeepCAM and OpenCatalyst. Using 32 CPUs and 129 Nvidia GPUs, Argonne researchers trained DeepCAM in 32.19 minutes and OpenCatalyst in 256.7 minutes. Argonne says it plans to use the results to develop better AI algorithms for two upcoming systems, Polaris and Aurora. The Swiss National Supercomputing Centre used Piz Daint to train OpenCatalyst and DeepCAM.
In the strong scaling category, Piz Daint trained OpenCatalyst in 753.11 minutes using 256 CPUs and 256 GPUs. It finished DeepCAM in 21.88 minutes using 1024 of each. The center will use the results to inform algorithms for its upcoming Alps supercomputer. Fujitsu and RIKEN used 512 of Fugaku's custom-made processors to perform CosmoFlow in 114 minutes. They then used half of the complete system—82,944 processors—to perform the weak scaling benchmark on the same neural network. That meant training 637 instances of CosmoFlow, which they managed to do at an average of 1.29 models per minute for a total of 495.66 minutes (not quite 8 hours). Helmholtz AI, a joint effort of Germany's largest research centers, tested both the JUWELS and HoreKa supercomputers. HoreKa's best effort was to chug through DeepCAM in 4.36 minutes using 256 CPUs and 512 GPUs. JUWELS did it in as little as 2.56 minutes using 1024 CPUs and 2048 GPUs. For CosmoFlow, its best effort was 16.73 minutes using 512 CPUs and 1024 GPUs. In the weak scaling benchmark, JUWELS used 1536 CPUs and 3072 GPUs to plow through DeepCAM at a rate of 0.76 models per minute. Lawrence Berkeley National Laboratory used the Perlmutter supercomputer to conquer CosmoFlow in 8.5 minutes (256 CPUs and 1024 GPUs), DeepCAM in 2.51 minutes (512 CPUs and 2048 GPUs), and OpenCatalyst in 111.86 minutes (128 CPUs and 512 GPUs). It used 1280 CPUs and 5120 GPUs for the weak scaling effort, yielding 0.68 models per minute for CosmoFlow and 2.06 models per minute for DeepCAM. The (U.S.) National Center for Supercomputing Applications did its benchmarks on the Hardware Accelerated Learning (HAL) system. Using 32 CPUs and 64 GPUs, they trained OpenCatalyst in 1021.18 minutes and DeepCAM in 133.91 minutes. Nvidia, which made the GPUs used in every entry except RIKEN's, tested its DGX A100 systems on CosmoFlow (8.04 minutes using 256 CPUs and 1024 GPUs) and DeepCAM (1.67 minutes with 512 CPUs and 2048 GPUs).
In weak scaling the system was made up of 1024 CPUs and 4096 GPUs and it plowed through 0.73 CosmoFlow models per minute and 5.27 DeepCAM models per minute. Texas Advanced Computing Center's Frontera-Longhorn system tackled CosmoFlow in 140.45 minutes and DeepCAM in 76.9 minutes using 64 CPUs and 128 GPUs. Editor's note 1 Dec 2021: This post incorrectly defined exaflop as "one-thousand trillion 64-bit floating-point operations per second." It now correctly defines it as one-million trillion flops per second.

  • Bionic Hand Gives Amputees Sense of Touch
    by Joanna Goodrich on 19. Novembra 2021. at 19:00

    On a visit to Pakistan with his parents, 7-year-old Aadeel Akhtar met a girl his age who was missing her right leg. That was the first time he had met a person with a limb difference. The girl's family could not afford the cost of getting her a prosthetic leg, so she used a tree branch as a crutch to help her walk. From that encounter, Akhtar decided that one day he would develop affordable artificial limbs. Twenty-one years later, in 2015, the IEEE member founded Psyonic, which designs and builds advanced, affordable artificial limbs. Akhtar is the CEO. The startup, headquartered in Champaign, Ill., released its first product—the Ability Hand—in September. It is the fastest bionic hand on the market and the only one with touch feedback. The prosthesis uses pressure sensors to mimic the sensation of touch through vibrations. It functions almost like a regular hand. All five fingers on the lightweight prosthesis flex and extend. It offers 32 different grips. "The most important thing for us is to give people a functioning, robust prosthesis that allows them to do things they never thought they would be able to do again," Akhtar says. The Ability Hand is available in the United States for patients age 13 or older. MAKING PROSTHETIC LIMBS ACCESSIBLE Akhtar originally wanted to work with people with amputations as a physician. He earned a bachelor's degree in biology in 2007 from Loyola University in Chicago. But while pursuing his degree, he took a computer science course and fell in love with the subject. "I loved everything about engineering, programming, and building things," he says. "I wanted to figure out a way to combine my interests in both engineering and medicine." He went on to earn a master's degree in computer science in 2008, also from Loyola. Two years later he was accepted into the Medical Scholars Program at the University of Illinois at Urbana-Champaign. The program allows students to earn both an M.D. and a Ph.D. in tandem. 
Akhtar earned an additional master's degree in electrical and computer engineering and a doctorate in neuroscience in 2016 but has not completed his medical degree. His research for his doctorate focused on developing what eventually became the Ability Hand. In 2014 he and another graduate student, Mary Nguyen, partnered with the Range of Motion Project, a nonprofit that provides prosthetic devices to people around the world who can't afford them. Akhtar and Nguyen flew to Quito, Ecuador, to test their product on Juan Suquillo, who lost his left hand during a 1979 border war between Ecuador and Peru. Using the prototype, Suquillo was able to pinch together his thumb and index finger for the first time in 35 years. He reported that he felt as though a part of him had come back thanks to the prosthesis. After that feedback, Akhtar said, he wanted "everyone to feel the same way that Juan did when using our prosthetic hand." Shortly after returning from that trip, Akhtar founded Psyonic. To get some advice about how to run the company and possibly win some money, he entered the bionic hand into the Cozad New Venture Challenge at the University of Illinois. The competition provides mentoring to teams, as well as workshops on topics such as pitching skills and customer development. Psyonic placed first and received a US $10,000 prize. The startup also won a $15,000 Samsung Research innovation prize in 2015. Since then, Psyonic has received funding from the University of Illinois Technology Entrepreneur Center, the iVenture Accelerator, and the U.S. National Science Foundation. The startup currently has 23 employees including engineers, public health experts, social workers, and doctors. DEVELOPING THE ABILITY HAND Psyonic's artificial hand weighs 500 grams, around the weight of an average adult hand. Most prosthetic hands weigh about 20 percent more, Akhtar says.
The Ability Hand contains six motors housed in a carbon fiber casing. It has silicone fingers, a battery pack, and muscle sensors that are placed over the patient's residual limb. If the patient has an amputation below her elbow, for example, two muscle sensors would be placed over her intact forearm muscle. She would be able to use those sensors to control the hand's movement and grip. The Ability Hand is connected by Bluetooth to a smartphone app, which provides users another way to configure and control the hand's movements. The hand's software is automatically updated through the app. Its battery recharges in an hour, the company says. Patients who used prosthetic hands, Akhtar says, cited issues such as a lack of sensation and frequent breakage. To give patients a sense of touch, the Ability Hand contains pressure sensors on the index finger, pinky, and thumb. When a patient touches an item, he will feel vibrations on his skin that mimic the sensation of touch. The prosthesis uses those vibrations to alert the user when he touches an object as well as indicate how hard he has grabbed it and when he has let go. The reason most prosthetic limbs break, Akhtar says, is that they are made of rigid materials such as plastic, wood, or metal, which can't bend when they hit a hard surface. Psyonic uses rubber and silicone to make the fingers, which are flexible and can withstand a great deal of force, he says. To test the durability of the hand, Akhtar arm-wrestled Dan St. Pierre, 2018–2019 U.S. paratriathlon national champion. The Ability Hand is also water-resistant, Akhtar says. "Everything we do has the patient in mind," Akhtar says. "We want to improve the quality of life for people with limb differences as much as possible. Seeing the effect the Ability Hand has already had on people in such a short time span motivates us to keep going."
Psyonic and its partners are researching how to improve the artificial hand. Akhtar says some of the partners, including the Shirley Ryan AbilityLab in Chicago and the University of Pittsburgh, are developing brain and spinal cord implants that could help patients control the prosthesis. The implants could stimulate the areas of the brain that control sensory intake. When a patient touches the prosthesis's fingers, the implants would send a signal to the brain that would make the patient feel the pressure. POSITIVE FEEDBACK Akhtar joined IEEE in 2010 when he was a doctoral student. He has presented papers on Psyonic's work at the IEEE/RSJ International Conference on Intelligent Robots and Systems and the IEEE International Conference on Robotics and Automation. IEEE provides a great "ecosystem" for prosthetic limbs and robotics, he says, and "it's amazing to be part of that community." He adds that having access to IEEE's community of scholars and professionals, some of whom are pioneers in the field, has helped the company gain important feedback on how it can improve the hand, as well as help in the development of legs in the future.

  • Video Friday: Dronut
    by Evan Ackerman on 19. Novembra 2021. at 17:06

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!): ICRA 2022 – May 23-27, 2022 – Philadelphia, PA, USA Let us know if you have suggestions for next week, and enjoy today's videos. We first met Cleo Robotics at CES 2017, when they were showing off a consumer prototype of their unique ducted-fan drone. They've just announced a new version which has been beefed up to do surveillance, and it is actually called the Dronut. For such a little thing, the 12-minute flight time is not the worst, and hopefully it'll find a unique niche that'll help Cleo move back towards the consumer market, because I want one. [ Cleo ] Happy tenth birthday, Thymio! [ EPFL ] Here we describe a protective strategy for winged drones that mitigates the added weight and drag by means of increased lift generation and stall delay at high angles of attack. The proposed structure is inspired by the wing system found in beetles and consists of adding an additional set of retractable wings, named elytra, which can rapidly encapsulate the main folding wings when protection is needed. [ EPFL ] This is some very, very impressive robust behavior on ANYmal, part of Joonho Lee's master's thesis at ETH Zurich. [ ETH Zurich ] NTT DOCOMO, INC. announced today that it has developed a blade-free, blimp-type drone equipped with a high-resolution video camera that captures high-quality video and full-color LED lights that glow in radiant colors. [ NTT Docomo ] via [ Gizmodo ] Senior Software Engineer Daniel Piedrahita explains the theory behind robust dynamic stability and how Agility engineers used it to develop a unique and cohesive hardware and software solution that allows Digit to navigate unpredictable terrain with ease. [ Agility ] The title of this video from DeepRobotics is "DOOMSDAY COMING."
Best not to think about it, probably. [ DeepRobotics ] More Baymax! [ Disney ] At Ben-Gurion University of the Negev, they're trying to figure out how to make a COVID-19 officer robot authoritative enough that people will actually pay attention to it and do what it says. [ Paper ] Thanks, Andy! You'd think that high-voltage power lines would be the last thing you'd want a drone to futz with, but here we are. [ GRVC ] Cassie Blue navigates around furniture treated as obstacles in the atrium of the Ford Robotics Building at the University of Michigan. [ Michigan Robotics ] Northrop Grumman and its partners AVL, Intuitive Machines, Lunar Outpost and Michelin are designing a new vehicle that will greatly expand and enhance human and robotic exploration of the Moon, and ultimately, Mars. [ Northrop Grumman ] This letter proposes a novel design for a coaxial hexarotor (Y6) with a tilting mechanism that can morph midair while in a hover, changing the flight stage from a horizontal to a vertical orientation, and vice versa, thus allowing wall-perching and wall-climbing maneuvers. [ KAIST ] Honda and Black & Veatch have successfully tested the prototype Honda Autonomous Work Vehicle (AWV) at a construction site in New Mexico. During the month-long field test, the second-generation, fully-electric Honda AWV performed a range of functions at a large-scale solar energy construction project, including towing activities and transporting construction materials, water, and other supplies to pre-set destinations within the work site. [ Honda ] This could very well be the highest speed multiplier I've ever seen in a robotics video. [ GITAI ] Here's an interesting design for a manipulator that can do in-hand manipulation with a minimum of fuss, from the Yale GRAB Lab. [ Paper ] That ugo robot that's just a ball with eyes on a stick is one of my favorite robots ever, because it's so unapologetically just a ball on a stick. [ ugo ] Robot, make me a sandwich.
And then make me a bunch more sandwiches. [ Soft Robotics ] Refilling water bottles isn't a very complex task, but having a robot do it means that humans don't have to. [ Fraunhofer ] To help manufacturers find cost-effective and sustainable alternatives to single-use plastic, ABB Robotics is collaborating with Zume, a global provider of innovative compostable packaging solutions. We will integrate and install up to 2000 robots at Zume customers' sites worldwide over the next five years to automate the innovative manufacturing production of 100 percent compostable packaging molded from sustainably harvested plant-based material for products from food and groceries to cosmetics and consumer goods. [ ABB ]