Homepage

IEEE News

  • How Efficient Is Your EV? It’s Complicated

    With motor sports moving hard and fast toward electric drive, a friend recently told me that electric drag-car racing just wouldn't be the same without the deafening roar of combustion engines and the smell of "nitro" fuel. She's 90 years old. I didn't feel like arguing with her. And she had a point: Carmakers and countries are announcing efforts to phase out fossil-fuel-burning automobiles in the next two decades. So perhaps it isn't too early to start getting nostalgic about our gas guzzlers.

    One of the things I'll miss about combustion-engine cars is how easy it is to calculate their efficiency. You just take the distance the car can travel on a tank of fuel and compare it with the size of the tank. In the United States, we might be happy with a car that gets 25 miles per gallon, whereas in the rest of the world that same car would be said to have an efficiency of 9.4 liters per 100 kilometers.

    With electric vehicles, however, calculating efficiency is not so simple. Right now the measure used in the United States is MPGe, which stands for miles per gallon of gasoline-equivalent. It indicates how many miles the car can travel as 33.7 kilowatt-hours (121 megajoules) of energy is drained from its batteries. Why 33.7 kilowatt-hours? Because the U.S. Environmental Protection Agency decreed in 2010 that the amount of thermal energy released by burning one gallon of gasoline is equivalent to 33.7 kWh.

    For an engineer, an obvious question arises: Why are we using a constant based on the combustion of gasoline to measure the efficiency of an electric car? Well, as the automotive industry begins a long transition from combustion to electricity, there is arguably some utility in a metric that seems to offer a means of comparing the efficiency of an electric car with that of a conventional one. But more fundamentally, there is a huge disparity in efficiency overall. Commonly available EVs have combined city/highway MPGe ratings ranging from 76 for the Jaguar I-Pace to 134 for the Tesla Model 3. That range is about four times as high as the span, in miles per gallon, associated with common combustion-engine cars. The efficiencies are so different because internal-combustion engines convert a lot of energy to heat rather than to mechanical torque.

    But the efficiency equation is more complicated than that because, unlike petrol pumps, recharging stations do not transfer energy perfectly. Depending on factors like the ambient-air temperature, how empty the battery is when you start charging, and the supply voltage to your EV's charging unit, the efficiency of charging can vary between 70 percent and 90 percent.

    I expect that soon all chargers will communicate digitally with a car as they charge it. The car will then know how much electricity was consumed, along with how much of that its batteries managed to store. The car will be able to display long-term data on charging, including how well it charges under different conditions. Some people will use this information to adapt their choices about where and when to recharge. For them, recharge efficiency might be just as important as the MPGe rating.

    Many of us will also want to know whether we are treading lightly on our planet. We'll be interested in how much carbon-based fuel, such as coal and natural gas, went into the electrical energy that charged our batteries.
    This environmental info might even take into account the pollution that came from building the solar arrays or wind farms that supplied renewable energy. And we'll eventually want to know how environmentally friendly the battery chemistry is for any car we buy.

    Of course, motor-sports enthusiasts probably won't spend a lot of time fretting about environmental impacts. The drag cars that inspired this column typically race a few hundred meters in just a few seconds, during which time their fuel efficiency is a dismal 0.011 miles per gallon. Electric dragsters will undoubtedly improve on that, to perhaps 0.04 MPGe. Personally, I won't miss the smell of nitromethane at all.
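    As a back-of-the-envelope illustration of the arithmetic above, the following Python sketch converts a combustion car's fuel economy between miles per gallon and liters per 100 kilometers and computes an EV's MPGe using the EPA's 33.7 kWh-per-gallon equivalence. It is a minimal sketch, not an official formula; the example trip and the 80 percent charging efficiency are illustrative assumptions, not figures from the column.

```python
# Fuel-economy arithmetic from the column above.
# The example trip and the 0.80 charging efficiency are illustrative assumptions.

KWH_PER_GALLON = 33.7        # EPA gasoline-equivalent energy content (kWh per gallon)
MILES_PER_KM = 0.621371
LITERS_PER_GALLON = 3.78541

def mpg_to_liters_per_100km(mpg: float) -> float:
    """Convert miles per gallon into liters per 100 kilometers."""
    km_per_liter = (mpg / MILES_PER_KM) / LITERS_PER_GALLON
    return 100.0 / km_per_liter

def mpge(miles_driven: float, kwh_from_battery: float) -> float:
    """Miles per gallon-equivalent: miles traveled per 33.7 kWh drained from the battery."""
    return miles_driven / kwh_from_battery * KWH_PER_GALLON

def wall_to_wheels_mpge(miles_driven: float, kwh_from_battery: float,
                        charging_efficiency: float = 0.80) -> float:
    """The same metric, but counted against energy drawn from the charger,
    assuming some of it never reaches the battery."""
    kwh_from_wall = kwh_from_battery / charging_efficiency
    return miles_driven / kwh_from_wall * KWH_PER_GALLON

if __name__ == "__main__":
    print(f"25 mpg is about {mpg_to_liters_per_100km(25):.1f} L/100 km")   # ~9.4, as above
    print(f"Battery-to-wheels: {mpge(100, 25):.0f} MPGe")                  # hypothetical 100 mi on 25 kWh
    print(f"Wall-to-wheels:    {wall_to_wheels_mpge(100, 25):.0f} MPGe")   # with 80% charging efficiency
```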

  • How Robots Helped Out After the Surfside Condo Collapse

    Editor's Note: Along with Robin Murphy, the authors of this article include David Merrick, Justin Adams, Jarrett Broder, Austin Bush, Laura Hart, and Rayne Hawkins. This team is with the Florida State University Disaster Incident Response Team, which was in Surfside for 24 days at the request of Florida US&R Task Force 1 (Miami Dade Fire Rescue Department).

    On June 24, 2021, at 1:25 a.m., portions of the 12-story Champlain Towers South condominium in Surfside, Florida, collapsed, killing 98 people and injuring 11, making it the third largest fatal collapse in US history. The life-saving and mitigation Response Phase, the phase where responders from local, state, and federal agencies searched for survivors, spanned June 24 to July 7, 2021. This article summarizes what is known about the use of robots at Champlain Towers South and offers insights into challenges for unmanned systems.

    Small unmanned aerial systems (drones) were used immediately upon arrival by the Miami Dade Fire Rescue (MDFR) Department to survey the roughly 2.68-acre affected area. Drones, such as the DJI Mavic 2 Enterprise Dual with a spotlight payload and thermal imaging, flew in the dark to determine the scope of the collapse and search for survivors. Regional and state emergency management drone teams were requested later that day to supplement the effort of flying day and night for tactical life-saving operations and to add flights for strategic operations to support managing the overall response.

    View of a Phantom 4 Pro in use for mapping the collapse on July 2, 2021. Two other drones were also in the airspace conducting other missions but not visible. Photo: Robin R. Murphy

    The teams brought at least nine models of rotorcraft drones, including the DJI Mavic 2 Enterprise Dual, Mavic 2 Enterprise Advanced, DJI Mavic 2 Zoom, DJI Mavic Mini, DJI Phantom 4 Pro, DJI Matrice 210, Autel Dragonfish, and Autel EVO II Pro, plus a tethered Fotokite drone. The picture above shows a DJI Phantom 4 Pro in use, with one of the multiple cranes on the site visible. The number of flights for tactical operations was not recorded, but drones were flown for 304 missions for strategic operations alone, making the Surfside collapse the largest and longest recorded use of drones for a disaster, exceeding the records set by Hurricane Harvey (112) and Hurricane Florence (260).

    Unmanned ground bomb squad robots were reportedly used on at least two occasions in the standing portion of the structure during the response, once to investigate and document the garage and once on July 9 to hold a repeater for a drone flying in the standing portion of the garage. Note that details about the ground robots are not yet available and there may have been more missions, though not on the order of magnitude of the drone use. Bomb squad robots tend to be too large for use in areas other than the standing portions of the collapse.

    This article concentrates on the use of the drones for tactical and strategic operations, as the authors were directly involved in those operations, and offers a preliminary analysis of the lessons learned. The full details of the response will not be available for many months due to the nature of an active investigation into the causes of the collapse and out of respect for the privacy of the victims and their families.

    Drone Use for Tactical Operations

    Tactical operations were carried out primarily by MDFR, with other drone teams supporting when necessary to meet the workload.
    Drones were first used by the MDFR drone team, which arrived within minutes of the collapse as part of the escalating calls. The drone effort started with night operations for direct life-saving and mitigation activities. Small DJI Mavic 2 Enterprise Dual drones with thermal camera and spotlight payloads were used for general situation awareness to help responders understand the extent of the collapse beyond what could be seen from the street side. The built-in thermal imager was used but lacked the resolution to show much detail, since much of the material was at the same temperature and heat emissions were fuzzy. The spotlight with the standard visible-light camera was more effective, though the view was constricted. The drones were also used to look for survivors or trapped victims, help determine safety hazards to responders, and provide task force leaders with overwatch of the responders. During daylight, DJI Mavic 2 Zoom drones were added because of their higher-resolution zoom cameras. When fires started in the rubble, drones with a streaming connection to bucket-truck operators were used to help optimize the positioning of water. Drones were also used to locate civilians entering the restricted area or flying drones to take pictures.

    As the response evolved, the use of drones was expanded to missions where the drones would fly in close proximity to structures and objects, fly indoors, and physically interact with the environment. For example, drones were used to read license plates to help identify residents, search for pets, and document belongings inside parts of the standing structure for families. In a novel use of drones for physical interaction, MDFR squads flew drones to attempt to find and pick up items of immense value to survivors in the standing portion of the structure. Before the demolition of the standing portion of the tower, MDFR used a drone to remove an American flag that had been placed on the structure during the initial search.

    Drone Use for Strategic Operations

    An orthomosaic of the collapse constructed from imagery collected by a drone on July 1, 2021.

    Strategic operations were carried out by the Disaster Incident Research Team (DIRT) from the Florida State University Center for Disaster Risk Policy. The DIRT team is a state of Florida asset and was requested by Florida Task Force 1 when it was activated to assist later on June 24. FSU supported tactical operations but was solely responsible for collecting and processing imagery for use in managing the response. This data was primarily orthomosaic maps (a single high-resolution image of the collapse created by stitching together individual high-resolution images, as in the image above) and digital elevation maps (created using structure from motion, below).

    Digital elevation map constructed from imagery collected by a drone on June 27, 2021. Photo: Robin R. Murphy

    These maps were collected every two to four hours during daylight, with FSU flying an average of 15.75 missions per day for the first two weeks of the response. The latest orthomosaic maps were downloaded at the start of a shift by the tactical responders for use as base maps on their mobile devices. In addition, a 3D reconstruction of the state of the collapse on July 4 was flown the afternoon before the standing portion was demolished, shown below.
    GeoCam 3D reconstruction of the collapse on July 4, 2021. Photo: Robin R. Murphy

    The mapping functions are notable because they require specialized software for data collection and post-processing, and the speed of post-processing relied on wireless connectivity. In order to stitch and fuse images without gaps or major misalignments, dedicated software packages are used to generate flight paths and to autonomously fly and trigger image capture with sufficient coverage of the collapse and overlap between images (a simple sketch of this spacing calculation appears below).

    Coordination of Drones on Site

    The aerial assets were loosely coordinated through social media. All drone teams and Federal Aviation Administration (FAA) officials shared a WhatsApp group chat managed by MDFR. WhatsApp offered ease of use, compatibility with everyone's smartphones and mobile devices, and ease of adding pilots. Ease of adding pilots was important because many were not from MDFR and thus would not be in any personnel-oriented coordination system. The pilots did not have physical meetings or briefings as a whole, though the tactical and strategic operations teams did share a common space (nicknamed the "Drone Zone") while the National Institute of Standards and Technology teams worked from a separate staging location. If a pilot was approved by the MDFR drone captain, who served as the "air boss," they were invited to the WhatsApp group chat and could then begin flying immediately without physically meeting the other pilots.

    The teams flew concurrently and independently, without rigid, pre-specified altitude or area restrictions. A team would post which area of the collapse they were taking off to fly over and at what altitude, and then post when they landed. The easiest solution was for the pilots to be aware of each other's drones and adjust their missions, pause, or temporarily defer flights. If a pilot forgot to post, someone would send a teasing chat eliciting a rapid apology. Incursions by civilian manned and unmanned aircraft into the restricted airspace did occur. If FAA observers or other pilots saw a drone flying that was not accounted for in the chat (for example, five drones visible over the area when only four were posted), or if a drone pilot saw a drone in an unexpected area, they would post a query asking if someone had forgotten to post or update a flight. If the drone remained unaccounted for, the FAA would assume that a civilian drone had violated the temporary flight restrictions and search the surrounding area for the offending pilot.

    Preliminary Lessons Learned

    While the drone data and performance are still being analyzed, some lessons learned have emerged that may be of value to the robotics, AI, and engineering communities. Tactical and strategic operations during the response phase favored small, inexpensive, easy-to-carry platforms with cameras supporting coarse structure from motion rather than larger, more expensive lidar systems. The added accuracy of lidar systems was not needed for those missions, though the greater accuracy and resolution of such systems were valuable for the forensic structural analysis. For tactical and strategic operations, the benefits of lidar were not worth the capital costs and logistical burden. Indeed, general-purpose consumer/prosumer drones that could fly day or night, indoors and outdoors, and for both mapping and first-person-view missions were highly preferred over specialized drones.
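    The flight planning mentioned in the mapping discussion above comes down to covering the site with a lawnmower pattern whose flight lines and photo triggers are spaced so that neighboring images overlap enough for the stitching software. The Python sketch below works that spacing out from a nadir-pointing camera's field of view; the altitude, field-of-view, and overlap values are illustrative assumptions, not parameters used at Surfside.

```python
# Sketch of lawnmower-survey spacing for orthomosaic mapping.
# Altitude, field of view, and overlap fractions below are illustrative assumptions.
import math

def ground_footprint(altitude_m: float, hfov_deg: float, vfov_deg: float):
    """Width and height (m) of the ground area seen by a nadir-pointing camera."""
    width = 2 * altitude_m * math.tan(math.radians(hfov_deg) / 2)
    height = 2 * altitude_m * math.tan(math.radians(vfov_deg) / 2)
    return width, height

def survey_spacing(altitude_m: float, hfov_deg: float, vfov_deg: float,
                   side_overlap: float = 0.7, front_overlap: float = 0.8):
    """Distance between adjacent flight lines and between photo triggers,
    chosen so neighboring images overlap by the requested fractions."""
    width, height = ground_footprint(altitude_m, hfov_deg, vfov_deg)
    line_spacing = width * (1 - side_overlap)
    trigger_spacing = height * (1 - front_overlap)
    return line_spacing, trigger_spacing

if __name__ == "__main__":
    # Hypothetical example: 60 m altitude with a 73.7 x 53.1 degree camera.
    lines, triggers = survey_spacing(60, 73.7, 53.1)
    print(f"Fly parallel lines {lines:.0f} m apart, triggering a photo every {triggers:.0f} m")
```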
    The reliability of a drone was another major factor in choosing a specific model to field, again favoring consumer/prosumer drones, as they typically have hundreds of thousands of hours more flight time than specialized or novel drones. Tethered drones offer some advantages for overwatch, but many tactical missions require a great deal of mobility, and strategic mapping necessitates flying directly over the entire area being mapped.

    While small, inexpensive general-purpose drones offered many advantages, they could be further improved for flying at night and indoors. A wider area of lighting would be helpful. A 360-degree (spherical) coverage area for obstacle avoidance would also be useful for working indoors or at low altitudes, in close proximity to irregular work envelopes, and near people, especially at night. Systems such as the Flyability ELIOS 2 are designed to fly in narrow and highly cluttered indoor areas, but no models were available for the immediate response. Drone camera systems need to be able to look straight up to inspect the underside of structures or ceilings. Mechanisms for determining the accurate GPS location of a pixel in an image, not just the GPS location of the drone, are becoming increasingly desirable.

    Other technologies could benefit the enterprise but face challenges. Computer vision/machine learning (CV/ML) for searching for victims in rubble is often mentioned as a possible goal, but a search for victims who are not on the surface of the collapse is not visually directed. The portions of victims that are not covered by rubble are usually camouflaged with gray dust, so searches tend to favor canines using scent. Another challenge for CV/ML methods is the lack of access to training data. Privacy and ethical concerns pose barriers to the research community gaining access to imagery with victims in the rubble, but simulations may not have sufficient fidelity.

    The collapse supplies motivation for how informatics, human-computer interaction, and human-robot interaction research can contribute to the effective use of robots during a disaster, and it illustrates that a response does not follow a strictly centralized, hierarchical command structure and that the agencies and members of the response are not known in advance. Proposed systems must be flexible, robust, and easy to use. Furthermore, it is not clear that responders will accept a totally new software app versus making do with a general-purpose app such as WhatsApp that the majority routinely use for other purposes.

    The biggest lesson learned is that robots are helpful and warrant more investment, particularly as many US states are proposing to stop purchasing, over cybersecurity concerns, the very models of drones that proved so effective. There remains much work to be done by researchers, manufacturers, and emergency management to make these critical technologies more useful for extreme environments. Our current work is focusing on creating open-source datasets and documentation and conducting a more thorough analysis to accelerate the process.

    Value of Drones

    The pervasive use of the drones indicates their implicit value in responding to, and documenting, the disaster.
    It is difficult to quantify the impact of drones, similar to the difficulties in quantifying the impact of a fire truck on firefighting or the use of mobile devices in general. Simply put, drones would not have been used beyond a few flights if they were not valuable.

    The impact of the drones on tactical operations was immediate, as upon arrival MDFR flew drones to assess the extent of the collapse. Lighting on fire trucks primarily illuminated the street side of the standing portion of the building, while the drones, unrestricted by streets or debris, quickly expanded situation awareness of the disaster. The drones were used to optimize the placement of water to suppress the fires in the debris. The impact of the use of drones for other tactical activities is harder to quantify, but the frequent flights and pilots remaining on standby 24/7 indicate their value.

    The impact of the drones on strategic operations was also considerable. The data collected by the drones and then processed into 2D maps and 3D models became a critical part of the US&R operations as well as one part of the nascent investigation into why the building failed. During initial operations, DIRT provided 2D maps to the US&R teams four times per day. These maps became the base layers for the mobile apps used on the pile to mark the locations of human remains, structural members of the building, personal effects, or other identifiable information. Updated orthophotos were critical to the accuracy of these reports. The apps running on mobile devices suffered from GPS accuracy issues, often with errors as high as ten meters. By having base imagery that was only hours old, mobile app users were able to "drag the pin" on the mobile app to a more accurate report location on the pile, all by visualizing where they were standing compared with fresh UAS imagery. Without this capability, none of the GPS field data would be of use to US&R or to investigators looking at why the structural collapse occurred.

    In addition to serving as a base layer in mobile applications, the updated map imagery was used in all tactical, operational, and strategic dashboards by the individual US&R teams as well as by the FEMA US&R Incident Support Team (IST) on site to assist in the management of the incident. Aside from the 2D maps and orthophotos, 3D models were created from the drone data and used by structural experts to plan operations, including identifying areas with high probabilities of finding survivors or victims. Three-dimensional data created through post-processing also supported the demand for up-to-date volumetric estimates: how much material was being removed from the pile, and how much remained. These metrics provided clear indications of progress throughout the operations.

    Acknowledgments

    Portions of this work were supported by NSF grants IIS-1945105 and CMMI-2140451. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. The authors express their sincere condolences to the families of the victims.

  • This Camera Can "See" the Bigger Picture

    Imagine being able to see fast-moving objects under poor lighting conditions, all with a wider angle of view. We humans will have to make do with the vision system we've evolved, but computer vision is always reaching new limits. In a recent advancement, a research team in South Korea has combined two different types of cameras in order to better track fast-moving objects and create 3D maps of challenging environments.

    The first type of camera used in this new design is an event-based camera, which excels at capturing fast-moving objects. The second is an omnidirectional (or fisheye) camera, which captures very wide angles. Kuk-Jin Yoon, an associate professor at the Visual Intelligence Lab at the Korea Advanced Institute of Science and Technology, notes that both camera types offer advantages that are desirable for computer vision. "The event camera has much less latency, less motion blur, much higher dynamic range, and excellent power efficiency," he says. "On the other hand, omni-directional cameras (cameras with fisheye lenses) allow us to get visual information from much wider views." His team sought to combine these approaches in a new design called event-based omnidirectional multi-view stereo (EOMVS). In terms of hardware, this means incorporating a fisheye lens with an event-based camera.

    The new system uses an omnidirectional event camera setup consisting of a DVXplorer event camera (rear) and an Entaniya Fisheye lens (front). Korea Advanced Institute of Science and Technology

    Next, software is needed to reconstruct 3D scenes with high accuracy. An approach that's commonly used involves taking multiple images from different camera angles in order to reconstruct 3D information. Yoon and his colleagues used a similar approach, but rather than using images, the new EOMVS approach reconstructs 3D spaces using event data captured by the modified event camera.

    The researchers tested their design against lidar measurements, which are known to be highly accurate for mapping out 3D spaces. The results were published July 9 in IEEE Robotics and Automation Letters. In their paper, the authors note that they believe this work is the first attempt to set up and calibrate an omnidirectional event camera and use it to solve a vision task. They tested the system with the field of view set at 145°, 160°, and 180°.

    The endeavor was largely a success. Yoon notes that the EOMVS system was very accurate at mapping out 3D spaces, with an error rate of 3 percent. The approach also proved to meet all the desirable features expected with such a combination of cameras. "[Using EOMVS] we can detect and track very fast-moving objects under very severe illumination without losing them in the field of view," says Yoon. "In that sense, 3D mapping with drones can be the most promising real-world application of the EOMVS."

    Along with testing EOMVS in a real-world setting, Yoon et al. also tested the camera system in simulated environments using the 3D computer graphics software Blender. Korea Advanced Institute of Science and Technology

    He says his team is planning to commercialize this new design, as well as build upon it. The current EOMVS design requires knowledge of where the camera is positioned in order to piece together and analyze the data. But the researchers are interested in devising a more flexible design, where the exact position of the camera does not need to be known beforehand. To achieve this, they aim to incorporate an algorithm that estimates the positions of the camera as it moves.
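    To give a rough sense of the geometry involved (this is not the authors' code), the sketch below back-projects a single pixel from a fisheye image into a 3D viewing ray using the equidistant projection model commonly assumed for fisheye lenses; a multi-view stereo method such as EOMVS estimates depth by intersecting rays like this one from several known camera poses. The focal length, principal point, and example pixel are made-up calibration values.

```python
# Back-project a fisheye pixel to a viewing ray with the equidistant model r = f * theta.
# The calibration values (f, cx, cy) and the example pixel are illustrative assumptions.
import numpy as np

def fisheye_pixel_to_ray(u: float, v: float, f: float, cx: float, cy: float) -> np.ndarray:
    """Return a unit 3D ray (in the camera frame) for pixel (u, v)."""
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)                        # radial distance from the principal point
    if r == 0:
        return np.array([0.0, 0.0, 1.0])        # center pixel looks straight down the axis
    theta = r / f                               # equidistant fisheye: angle off the optical axis
    return np.array([np.sin(theta) * dx / r,
                     np.sin(theta) * dy / r,
                     np.cos(theta)])

if __name__ == "__main__":
    # A pixel far from the image center maps to a ray roughly 80 degrees off-axis,
    # which is how a single fisheye camera can cover a 145-180 degree field of view.
    ray = fisheye_pixel_to_ray(u=1088.0, v=400.0, f=320.0, cx=640.0, cy=400.0)
    angle_deg = float(np.degrees(np.arccos(ray[2])))
    print(ray, f"angle from axis: {angle_deg:.1f} deg")
```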

About Us

The Department of Telecommunications was founded in 1976 and is one of four departments of the Faculty of Electrical Engineering, University of Sarajevo. Since 2005, its study programs have been harmonized with the Bologna Declaration and are divided into bachelor's, master's, and doctoral studies.

The bachelor's study program, with a duration of three years, is oriented towards the fundamentals of engineering practice and telecommunications knowledge, and it is accredited by ASIIN, a German member of the European Association for Quality Assurance in Higher Education (ENQA). On the other hand, the master's study program, with a duration of two years, is oriented towards practical engineering work and scientific research activities.

More details