Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA
RSS 2026: 13–17 July 2026, SYDNEY
Summer School on Multi-Robot Systems: 29 July–4 August 2026, PRAGUE

Enjoy today’s videos!

“Roadrunner” is a new bipedal wheeled robot prototype designed for multi-modal locomotion. It weighs around 15 kg (33 lb) and can seamlessly switch between its side-by-side and in-line wheel modes and stepping configurations depending on what is required for navigating its environment. The robot’s legs are entirely symmetric, allowing it to point its knees forward or backward, which can be used to avoid obstacles or manage specific movements. A single control policy was trained to handle both side-by-side and in-line driving. Several behaviors, including standing up from various ground configurations and balancing on one wheel, were successfully deployed zero-shot on the hardware.

[ Robotics and AI Institute ]

Incredibly (INCREDIBLY!) NASA says that this is actually happening.

NASA’s SkyFall mission will build on the success of the Ingenuity Mars helicopter, which achieved the first powered, controlled flight on another planet. Using a daring mid-air deployment, SkyFall will deliver a team of next-gen Mars helicopters to scout human landing sites and map subsurface water ice.

[ NASA ]

NASA’s MoonFall mission will blaze a path for future Artemis missions by sending four highly mobile drones to survey the lunar surface around the Moon’s South Pole ahead of astronauts’ arrival there. MoonFall is built on the legacy of NASA’s Ingenuity Mars Helicopter. The drones will be launched together and released during descent to the surface. They will land and operate independently over the course of a lunar day (14 Earth days) and will be able to explore hard-to-reach areas, including permanently shadowed regions (PSRs), surveying terrain with high-definition optical cameras and other potential instruments.

For what it’s worth, Moon landings have a success rate well under 50 percent. So let’s send some robots there to land over and over!

[ NASA ]

In Science Robotics, researchers from the Tangible Media group led by Professor Hiroshi Ishii, together with colleagues from Politecnico di Bari, present Electrofluidic Fiber Muscles: a new class of artificial muscle fibers for robots and wearables. Unlike the rigid servo motors used in most robots, these fiber-shaped muscles are soft and flexible. They combine electrohydrodynamic (EHD) fiber pumps — slender tubes that move liquid using electric fields to generate pressure silently, with no moving parts — with fluid-filled fiber actuators. These artificial muscles could enable more agile untethered robots, as well as wearable assistive systems with compact actuation integrated directly into textiles.

[ MIT Media Lab ]

In this study, we developed MEVIUS2, an open-source quadruped robot. It is comparable in size to Boston Dynamics Spot, equipped with two LiDARs and a C1 camera, and can freely climb stairs and steep slopes! All hardware, software, and learning environments are released as open source.

[ MEVIUS2 ]

Thanks, Kento!

What goes into preparing for a live performance? Arun highlights the reliability testing that goes into trying a new behavior for Spot.

[ Boston Dynamics ]

In this work, a multi-robot planning and control framework is presented and demonstrated with a team of 40 indoor robots, including both ground and aerial robots.

That soundtrack though.

[ GitHub ]

Thanks, Keisuke!

Quadrupedal robots can navigate cluttered environments like their animal counterparts, but their floating-base configuration makes them vulnerable to real-world uncertainties. Controllers that rely only on proprioception (body sensing) must physically collide with obstacles to detect them. Those that add exteroception (vision) need precisely modeled terrain maps that are hard to maintain in the wild. DreamWaQ++ bridges this gap by fusing both modalities through a resilient multi-modal reinforcement learning framework. The result: a single controller that handles rough terrain, steep slopes, and high-rise stairs—while gracefully recovering from sensor failures and situations it has never seen before.

That cliff behavior is slightly uncanny.

[ DreamWaQ++ ]

I take issue with this from iRobot: While the pyramid exploration that iRobot did was very cool, they did it with a custom-made robot designed for a very specific environment. Cleaning your floors is way, way harder. Here’s a bit more detail on the pyramids thing:

[ iRobot ]

More robots in the circus, please!

[ Daniel Simu ]

MIT engineers have designed a wristband that lets wearers control a robotic hand with their own movements. By moving their hands and fingers, users can direct a robot to perform specific tasks, or they can manipulate objects in a virtual environment with high-dexterity control.

[ MIT ]

At NVIDIA GTC 2026, we showcased how AI is moving into the physical world. Visitors interacted with robots using voice commands, watching them interpret intent and act in real time — powered by our KinetIQ AI brain.

[ Humanoid ]

Props to Sony for their continued support and updates for Aibo!

[ Aibo ]

This robot looks like it could be a little curvier than normal?

[ LimX Dynamics ]

Developed by Zhejiang Humanoid Robot Innovation Center Co., Ltd., the Naviai Robot is an intelligent cooking device. It can autonomously process ingredients, perform cooking tasks with high accuracy, adjust smart kitchen equipment in real time, and complete post-cooking cleaning. Equipped with multi-modal perception technology, it adapts to daily kitchen environments and ensures safe and stable operation.

That 7x is doing some heavy lifting.

[ Zhejiang Lab ]

This CMU RI Seminar is by Hadas Kress-Gazit from Cornell, on “Formal Methods for Robotics in the Age of Big Data.”

Formal methods – mathematical techniques for describing systems, capturing requirements, and providing guarantees – have been used to synthesize robot control from high-level specifications, and to verify robot behavior. Given the recent advances in robot learning and data-driven models, what role can, and should, formal methods play in advancing robotics? In this talk I will give a few examples of what we can do with formal methods, discuss their promise and challenges, and describe the synergies I see with data-driven approaches.

[ Carnegie Mellon University Robotics Institute ]
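The seminar’s framing, specify requirements first and then check behavior against them, can be made concrete with a toy example. The sketch below is entirely invented (not from the talk): it monitors a logged robot trajectory against two simple safety rules, never enter a keep-out zone and come to a stop within two seconds of a stop command. All names, thresholds, and data are hypothetical.

```python
# Toy runtime monitor for two hypothetical safety requirements, in the
# spirit of formal-methods-style specification checking. All names,
# rules, and data here are invented for illustration.

from dataclasses import dataclass

@dataclass
class State:
    t: float          # timestamp, seconds
    x: float          # robot x position, meters
    y: float          # robot y position, meters
    speed: float      # m/s
    stop_cmd: bool    # True once a stop command has been issued

KEEP_OUT = (4.0, 6.0, 4.0, 6.0)   # x_min, x_max, y_min, y_max
STOP_DEADLINE = 2.0               # seconds allowed to reach zero speed

def violates_keep_out(s: State) -> bool:
    x_min, x_max, y_min, y_max = KEEP_OUT
    return x_min <= s.x <= x_max and y_min <= s.y <= y_max

def check_trace(trace: list[State]) -> list[str]:
    """Return human-readable violations found in the logged trace."""
    violations = []
    stop_time = None
    for s in trace:
        if violates_keep_out(s):
            violations.append(f"t={s.t:.1f}: entered keep-out zone")
        if s.stop_cmd and stop_time is None:
            stop_time = s.t
        if stop_time is not None and s.t - stop_time > STOP_DEADLINE and s.speed > 0:
            violations.append(
                f"t={s.t:.1f}: still moving {s.t - stop_time:.1f}s after stop command")
    return violations

trace = [State(0.0, 1, 1, 0.5, False),
         State(1.0, 3, 3, 0.5, True),
         State(4.0, 5, 5, 0.2, True)]   # enters zone, still moving 3 s after stop
print(check_trace(trace))
```

A real formal-methods pipeline would prove such properties over all possible behaviors rather than one logged trace, but the specification-first mindset is the same.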
When you hear the term humanoid robot, you may think of C-3PO, the human-cyborg-relations android from Star Wars. C-3PO was designed to assist humans in communicating with robots and alien species. The droid, which first appeared on screen in 1977, joined the characters on their adventures, walking, talking, and interacting with the environment like a human. It was ahead of its time.

Before the release of Star Wars, a few androids did exist and could move and interact with their environment, but none could do so without losing its balance.

It wasn’t until 1996 that the first autonomous robot capable of walking without falling was developed, in Japan. Honda’s Prototype 2 (P2) was nearly 183 centimeters tall and weighed 210 kilograms. It could control its posture to maintain balance, and it could move multiple joints simultaneously.

In recognition of that decades-old feat, P2 has been honored as an IEEE Milestone. The dedication ceremony is scheduled for 28 April at the Honda Collection Hall, located on the grounds of the Mobility Resort Motegi, in Japan. The machine is on display in the hall’s robotics exhibit, which showcases the evolution of Honda’s humanoid technology.

In support of the Milestone nomination, members of the IEEE Nagoya (Japan) Section wrote: “This milestone demonstrated the feasibility of humanlike locomotion in machines, setting a new standard in robotics.” The Milestone proposal is available on the Engineering Technology and History Wiki.

Developing a domestic android

In 1986 Honda researchers Kazuo Hirai, Masato Hirose, Yuji Haikawa, and Toru Takenaka set out to develop what they called a “domestic robot” to collaborate with humans. It would be able to climb stairs, remove impediments in its path, and tighten a nut with a wrench, according to their research paper on the project.

“We believe that a robot working within a household is the type of robot that consumers may find useful,” the authors wrote.

But to create a machine that would do household chores, it had to be able to move around obstacles such as furniture, stairs, and doorways. It needed to autonomously walk and read its environment like a human, according to the researchers.

No robot could do that at the time. The closest technologists had come was the WABOT-1. Built in 1973 at Waseda University, in Tokyo, the WABOT had eyes and ears, could speak Japanese, and used tactile sensors embedded in its hands as it gripped and moved objects. Although the WABOT could walk, albeit unsteadily, it couldn’t maneuver around obstacles or maintain its balance. It was powered by an external battery and computer.

To build an android, the Honda team began by analyzing how people move, using themselves as models. That led to specifications that gave the robot humanlike dimensions, including the location of the leg joints and how far the legs could rotate.

Once they began building the machine, though, the engineers found it difficult to satisfy every specification. Adjustments were made to the number of joints in the robot’s hips, knees, and ankles, according to the research paper. Humans have four hip, two knee, and three ankle joints; P2’s predecessor had three hip, one knee, and two ankle joints. The arms were treated similarly. A human’s four shoulder and three elbow joints became three shoulder joints and one elbow joint in the robot.

The researchers installed existing Honda motors and hydraulics in the hips, knees, and ankles to enable the robot to walk. Each joint was operated by a DC motor with a harmonic-drive reduction gear system, which is compact and offers high torque capacity.

To test their ideas, the engineers built what they called E0. The robot, which was just a pair of connected legs, successfully walked. It took about 15 seconds to take each step, however, and it moved in a straight line using static walking, according to a post about the project on Honda’s website. (Static walking is when the body’s center of mass is always within the foot’s sole. Humans walk with their center of mass below their navel.)

The researchers created several algorithms to enable the robot to walk like a human, according to the Honda website. These algorithms allowed the robot to use dynamic walking, a locomotion mechanism whereby the robot stays upright by constantly moving and adjusting its balance rather than keeping its center of mass over its feet, according to a video on the YouTube channel Everything About Robotics Explained.
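The static-versus-dynamic distinction is easy to state in code. Here is a minimal static-stability check (all dimensions invented for illustration, and a deliberate simplification of real balance control): a statically walking robot like E0 keeps this test true at every instant, while a dynamic walker like P2 deliberately lets it fail and catches itself with the next step.

```python
# Toy static-stability check: is the center of mass directly above the
# support area? Numbers are invented; real balance control (like P2's
# dynamic walking) works with moving targets, not this static test.

def com_over_support(com_xy, support_rect):
    """True if the CoM's ground projection falls inside the support rectangle."""
    x, y = com_xy
    x_min, x_max, y_min, y_max = support_rect
    return x_min <= x <= x_max and y_min <= y <= y_max

# A single supporting foot, roughly 30 cm x 12 cm (values assumed).
foot = (0.00, 0.30, 0.00, 0.12)

print(com_over_support((0.15, 0.06), foot))  # True: statically stable
print(com_over_support((0.45, 0.06), foot))  # False: would tip without stepping
```

Dynamic walking deliberately allows the second case to occur, catching the fall with the next step; making that reliable is what P2’s sensing and posture control delivered.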
“P2 was not just a technical achievement; it was a catalyst that propelled the field of humanoid robotics forward, demonstrating the potential for robots to interact with and assist humans in meaningful ways.” —IEEE Nagoya Section

The Honda team installed rubber brushes on the bottom of the machine’s feet to reduce vibrations from the landing impacts (the force experienced when its feet touch the ground)—which had made the robot lose its balance.

Between 1987 and 1991, three more prototypes (E1, E2, and E3) were built, each testing a new algorithm. E3 was a success.

With the dynamic-walking mechanism complete, the researchers continued their quest to make the robot stable. The team added 6-axis sensors to detect the force at which the ground pushed back against the robot’s feet and the movements of each foot and ankle, allowing the robot to adjust its gait in real time for stability.

The team also developed a posture-stabilizing control system to help the robot stay upright. A local controller directed how the electric motor actuators needed to move so the robot could follow the leg joint angles when walking, according to the research paper.

During the next three years, the team tested the systems and built three more prototypes (E4, E5, and E6), which had boxlike torsos atop the legs.

In 1993 the team was finally ready to build an android with arms and a head that looked more like C-3PO, dubbed Prototype 1 (P1). Because the machine was meant to help people at home, the researchers determined its height and limb proportions based on the typical measurements of doorways and stairs. The arm length was based on the robot’s ability to pick up an object while squatting.

When they finished building P1, it was 191.5 cm tall, weighed 175 kg, and used an external power source and computer. It could turn a switch on and off, grab a doorknob, and carry a 70-kg object.

P1 was not launched publicly but instead was used to conduct research on how to further improve the design. The engineers looked at how to install an internal power source and computer, for example, as well as how to coordinate the movement of the arms and legs, according to Honda.

For P2, four video cameras were installed in its head—two for vision processing and the other two for remote operation. The head was 60 cm wide and connected to the torso, which was 75.6 cm deep.

A computer with four microSPARC II processors running a real-time operating system was installed in the robot’s torso. The processors were used to control the arms, legs, joints, and vision-processing cameras.

Also within the body were DC servo amplifiers, a 20-kg nickel-zinc battery, and a wireless Ethernet modem, according to the research paper. The battery lasted for about 15 minutes; the machine also could be charged by an external power supply. The hardware was enclosed in white-and-gray casing.

P2, which was launched publicly in 1996, could walk freely, climb up and down stairs, push carts, and perform some actions wirelessly.

King Rose Archives

The following year, Honda’s engineers released the smaller and lighter P3. It was 160 cm tall and weighed 130 kg.

In 2000 the popular ASIMO robot was introduced. Although shorter than its predecessors at 130 cm, it could walk, run, climb stairs, and recognize voices and faces. The most recent version was released in 2011; Honda has since retired the robot.

Honda P2’s influence

Thanks to P2, today’s androids are not just ideas in a laboratory. Robots have been deployed to work in factories and, increasingly, at home.

The machines are even being used for entertainment. During this year’s Spring Festival gala in Beijing, machines developed by Chinese startups Unitree Robotics, Galbot, Noetix, and MagicLab performed synchronized dances, martial arts, and backflips alongside human performers.

“P2’s development shifted the focus of robotics from industrial applications to human-centric designs,” the Milestone sponsors explained in the wiki entry. “It inspired subsequent advancements in humanoid robots and influenced research in fields like biomechanics and artificial intelligence.

“It was not just a technical achievement; it was a catalyst that propelled the field of humanoid robotics forward, demonstrating the potential for robots to interact with and assist humans in meaningful ways.”

To learn more about robots, check out IEEE Spectrum’s guide.

Recognition as an IEEE Milestone

A plaque recognizing Honda’s P2 robot as an IEEE Milestone is to be installed at the Honda Collection Hall. The plaque is to read:

In 1996 Prototype 2 (P2), a self-contained autonomous bipedal humanoid robot capable of stable dynamic walking and stair-climbing, was introduced by Honda. Its legged robotics incorporated real-time posture control, dynamic balance, gait generation, and multijoint coordination. Honda’s mechatronics and control algorithms set technical benchmarks in mobility, autonomy, and human-robot interaction. P2 inspired new research in humanoid robot development, leading to increasingly sophisticated successors.

Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world.
WHEN KYIV-BORN ENGINEER Yaroslav Azhnyuk thinks about the future, his mind conjures up dystopian images. He talks about “swarms of autonomous drones carrying other autonomous drones to protect them against autonomous drones, which are trying to intercept them, controlled by AI agents overseen by a human general somewhere.” He also imagines flotillas of autonomous submarines, each carrying hundreds of drones, suddenly emerging off the coast of California or Great Britain and discharging their cargoes en masse into the sky.

“How do you protect from that?” he asks as we speak in late December 2025: me at my quiet home office in London, he in Kyiv, which is bracing for another wave of missile attacks.

Azhnyuk is not an alarmist. He cofounded and was formerly CEO of Petcube, a California-based company that uses smart cameras and an app to let pet owners keep an eye on their beloved creatures left alone at home. A self-described “liberal guy who didn’t even receive military training,” Azhnyuk changed his mind about developing military tech in the months following the Russian invasion of Ukraine in February 2022. By 2023, he had relinquished his CEO role at Petcube to do what many Ukrainian technologists have done—to help defend his country against a mightier aggressor.

It took a while for him to figure out what, exactly, he should be doing. He didn’t join the military, but through friends on the front line, he witnessed how, out of desperation, Ukrainian troops turned to off-the-shelf consumer drones to make up for their country’s lack of artillery.

Ukrainian troops first began using drones for battlefield surveillance, but within a few months they figured out how to strap explosives onto them and turn them into effective, low-cost killing machines. Little did they know they were fomenting a revolution in warfare.

The Ukrainian robotics company The Fourth Law produces an autonomy module [above] that uses optics and AI to guide a drone to its target. Yaroslav Azhnyuk [top, in light shirt], founder and CEO of The Fourth Law, describes a developmental drone with autonomous capabilities to Ukrainian President Volodymyr Zelenskyy and German Chancellor Olaf Scholz. Top: THE PRESIDENTIAL OFFICE OF UKRAINE; Bottom: THE FOURTH LAW

That revolution was on display last month, as the U.S. and Israel went to war with Iran. It soon became clear that attack drones are being extensively used by both sides. Iran, for example, is relying heavily on the Shahed drones that the country invented and that are now also being manufactured in Russia and launched by the thousands every month against Ukraine.

A thorough analysis of the Middle East conflict will take some time to emerge. And so to understand the direction of this new way of war, look to Ukraine, where its next phase—autonomy—is already starting to come into view. Outnumbered by the Russians and facing increasingly sophisticated jamming and spoofing aimed at causing the drones to veer off course or fall out of the sky, Ukrainian technologists realized as early as 2023 that what could really win the war was autonomy. Autonomous operation means a drone isn’t being flown by a remote pilot, and therefore there’s no communications link to that pilot that can be severed or spoofed, rendering the drone useless.

By late 2023, Azhnyuk set out to help make that vision a reality. He founded two companies, The Fourth Law and Odd Systems: the first to develop AI algorithms to help drones overcome jamming during final approach, the second to build thermal cameras to help those drones better sense their surroundings.

“I moved from making devices that throw treats to dogs to making devices that throw explosives on Russian occupants,” Azhnyuk quips.

Since then, The Fourth Law has dispatched “more than thousands” of autonomy modules to troops in eastern Ukraine (it declines to give a more specific figure). The modules, worth around US $50 each, can be retrofitted onto existing drones to take over navigation during the final approach to the target. Azhnyuk says they increase the drone-strike success rate to as much as four times that of purely operator-controlled drones.

And that is just the beginning. Azhnyuk is one of thousands of developers, including some who relocated from Western countries, who are applying their skills and other resources to advancing the drone technology that is the defining characteristic of the war in Ukraine. This eclectic group of startups and founders includes Eric Schmidt, the former Google CEO, whose company Swift Beat is churning out autonomous drones and modules for Ukrainian forces. The frenetic pace of tech development is helping a scrappy, innovative underdog hold at bay a much larger and better-equipped foe.

All of this development is careening toward AI-based systems that enable drones to navigate by recognizing features in the terrain, lock on to and chase targets without an operator’s guidance, and eventually exchange information with each other through mesh networks, forming self-organizing robotic kamikaze swarms. Such an attack swarm would be commanded by a single operator from a safe distance.

According to some reports, autonomous swarming technology is also being developed for sea drones. Ukraine has had some notable successes with sea drones, which have reportedly destroyed or damaged around a dozen Russian vessels.

The Skynode X system, from Auterion, provides a degree of autonomy to a drone. AUTERION

For Ukraine, swarming can solve a major problem that puts the nation at a disadvantage against Russia—the lack of personnel. Autonomy is “the single most impactful defense technology of this century,” says Azhnyuk. “The moment this happens, you shift from a manpower challenge to a production challenge, which is much more manageable,” he adds.

The autonomous warfare future envisioned by Azhnyuk and others is not yet a reality. But Marc Lange, a German defense analyst and business strategist, believes that “an inflection point” is already in view. Beyond it, “things will be so dramatically different,” he says.

“Ukraine pretty rapidly realized that if the operator-to-drone ratio can be shifted from one-to-one to one-to-many, that creates great economies of scale and an amazing cost exchange ratio,” Lange adds. “The moment one operator can launch 100, 50, or even just 20 drones at once, this completely changes the economics of the war.”

Drones With a View

For a while, jammers that sever the radio links between drones and operators or that spoof GPS receivers were able to provide fairly reliable defense against human-controlled first-person-view (FPV) drones. But as autonomous navigation progressed, those electronic shields have gradually become less effective. Defenders must now contend with unjammable drones—ones that are attached to hair-thin optical fibers or that are capable of finding their way to their targets without external guidance.
In this emerging struggle, the defenders’ track records aren’t very encouraging: The typical countermeasure is to try to shoot down the attacking drone with a service weapon. It’s rarely successful.

A truck outfitted with signal-jamming gear drives under antidrone nets near Oleksandriya, in eastern Ukraine, on 2 October 2025. ED JONES/AFP/GETTY IMAGES

“The attackers gain an immense advantage from unmanned systems,” says Lange. “You can have a drone pop up from anywhere and it can wreak havoc. But from autonomy, they gain even more.”

The self-navigating drones rely on image-recognition algorithms that have been around for over a decade, says Lange. And the mass deployments of drones on Ukrainian battlefields are enabling both Russian and Ukrainian technologists to create huge datasets that improve the training and precision of those AI algorithms.

A Ukrainian land robot, the Ravlyk, can be outfitted with a machine gun.

While uncrewed aerial vehicles (UAVs) have received the most attention, the Ukrainian military is also deploying dozens of different kinds of drones on land and sea. Ukraine, struggling with a shortage of infantry personnel, began working on replacing a portion of its human soldiers with wheeled ground robots in 2024. As of early 2026, thousands of ground robots are crawling across the gray zone along the front line in eastern Ukraine. Most are used to deliver supplies to the front line or to help evacuate the wounded, but some “killer” ground robots fitted with turrets and remotely controlled machine guns have also been tested.

In mid-February, Ukrainian authorities released a video of a Ukrainian ground robot using its thermal camera to detect a Russian soldier in the dark of night and then kill the invader with a round from a heavy machine gun. So far these robots are mostly controlled by a human operator, but the makers of these uncrewed ground vehicles say their systems are capable of basic autonomous operations, such as returning to base when the radio connection is lost. The goal is to enable them to swarm so that one operator controls not one but a whole herd of mesh-connected killer robots.

But Bryan Clark, senior fellow and director of the Center for Defense Concepts and Technology at the Hudson Institute, questions how quickly ground robots’ abilities can progress. “Ground environments are very difficult to navigate in because of the terrain you have to address,” he says. “The line of sight for the sensors on the ground vehicles is really constrained because of terrain, whereas an air vehicle can see everything around it.”

To achieve autonomy, maritime drones, too, will require navigational approaches beyond AI-based image recognition, possibly based on star positions or on electronic signals from radios and cell towers that are within reach, says Clark. Such technologies are still being developed or are in a relatively early operational stage.

How the Shaheds Got Better

Russia is not lagging behind. In fact, some analysts believe its autonomous systems may be slightly ahead of Ukraine’s. For a good example of the Russian military’s rapid evolution, they say, consider the long-range Iranian-designed Shahed drones. Since 2022, Russia has been using them to attack Ukrainian cities and other targets hundreds of kilometers from the front line. “At the beginning, Shaheds just had a frame, a motor, and an inertial navigation system,” Oleksii Solntsev, CEO of Ukrainian defense tech startup MaXon Systems, tells me. “They used to be imprecise and pretty stupid. But they are becoming more and more autonomous.” Solntsev founded MaXon Systems in late 2024 to help protect Ukrainian civilians from the growing threat of Shahed raids.

A Russian Geran-2 drone, based on the Iranian Shahed-136, flies over Kyiv during an attack on 27 December 2025. SERGEI SUPINSKY/AFP/GETTY IMAGES

First produced in Iran in the 2010s, Shaheds can carry 90-kilogram warheads up to 650 km (50-kg warheads can go twice as far). They cost around $35,000 per unit, compared with a couple of million dollars, at least, for a ballistic missile. The low cost allows Russia to manufacture Shaheds in high quantities, unleashing entire fleets onto Ukrainian cities and infrastructure almost every night.

The early Shaheds were able to reach a preprogrammed location based on satellite-navigation coordinates. Even one of these early models could frequently overcome the jamming of satellite-navigation signals with the help of an onboard inertial navigation unit. This was essentially a dead-reckoning system of accelerometers and gyroscopes that estimates the drone’s position from continual measurements of its motions.
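The dead-reckoning idea is simple enough to sketch, and the sketch also shows its weakness. The toy integrator below is illustrative only (the timing and bias values are invented, and real inertial units run far more sophisticated filtering): a small uncorrected accelerometer bias compounds into an enormous position error over a long flight.

```python
# Toy 1-D dead reckoning: integrate acceleration twice to track position.
# Illustrative only; the sample rate and the constant bias are invented.

DT = 0.1          # seconds between accelerometer readings
BIAS = 0.02       # m/s^2 of uncorrected sensor bias

def dead_reckon(accels, dt=DT):
    """Integrate acceleration samples into velocity, then position."""
    v = pos = 0.0
    for a in accels:
        v += a * dt
        pos += v * dt
    return pos

duration = 3600.0                      # a one-hour flight
n = int(duration / DT)
true_accels = [0.0] * n                # cruising at constant speed
measured = [a + BIAS for a in true_accels]

drift = dead_reckon(measured) - dead_reckon(true_accels)
print(f"position drift after 1 hour: {drift/1000:.1f} km")   # ~129.6 km
```

That unbounded drift is why the early Shaheds were “imprecise and pretty stupid,” and why adding cameras and terrain recognition, as described below, changes the picture so dramatically.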
In the Donetsk Region, on 15 August 2025, a Ukrainian soldier hunts for Shaheds and other drones with a thermal-imaging system attached to a ZU-23 23-millimeter antiaircraft gun. KOSTYANTYN LIBEROV/LIBKOS/GETTY IMAGES

Ukrainian defense forces learned to down Shaheds with heavy machine guns, but as Russia continued to innovate, the daily onslaughts started to become increasingly effective.

Today’s Shaheds fly faster and higher, and therefore are more difficult to detect and take down. Between January 2024 and August 2025, the number of Shaheds and Shahed-type attack drones launched by Russia into Ukraine per month increased more than tenfold, from 334 to more than 4,000. In 2025, Ukraine found AI-enabling Nvidia chipsets in the wreckage of downed Shaheds, as well as thermal-vision modules capable of locking onto targets at night.

“Now, they are interconnected, which allows them to exchange information with each other,” Solntsev says. “They also have cameras that allow them to autonomously navigate to objects. Soon they will be able to tell each other to avoid a jammed region or an area where one of them got intercepted.”

These Russian-manufactured Shaheds, which Russian forces call Geran-2s, are thought to be more capable than the garden-variety Shahed-136s that Iran has lately been launching against targets throughout the Middle East. Even the relatively primitive Shahed-136s have done considerable damage, according to press accounts.

Those Shahed successes may accrue, at least in part, from the fact that the United States and Israel lack Ukraine’s long experience with fending them off. In just two days in early March, upward of a thousand drones, mostly Shaheds, were launched against U.S. and Israeli targets, with hundreds of them reportedly finding their marks.

One attack, caught on video, shows a Shahed destroying a radar dome at the U.S. Navy base in Manama, Bahrain. U.S. forces were understood to be attempting to fend off the drones by striking launch platforms, dispatching fighter aircraft to shoot them down, and using some extremely costly air-defense interceptors, including ones meant to down ballistic missiles. On 4 March, CNN reported that in a congressional briefing the day before, top U.S. defense officials, including Secretary of Defense Pete Hegseth, acknowledged that U.S. air defenses weren’t keeping up with the onslaught of Shahed drones.

Russian V2U attack drones are outfitted with Nvidia processors and run computer-vision software and AI algorithms to enable the drones to navigate autonomously. GUR OF THE MINISTRY OF DEFENSE OF UKRAINE

Russia is also starting to field a newer generation of attack drones. One of these, the V2U, has been used to strike targets in the Sumy region of northeastern Ukraine. The V2U drones are outfitted with Nvidia Jetson Orin processors and run computer-vision software and AI algorithms that allow the drones to navigate even where satellite navigation is jammed.

The sale of Nvidia chips to Russia is banned under U.S. sanctions against the country. However, press reports suggest that the chips are getting to Russia via intermediaries in India.

Antidrone Systems Step Up

MaXon Systems is one of several companies working to fend off the nightly drone onslaught. Within one year, the company developed and battle-tested a Shahed interception system that hints at the sci-fi future envisioned by Azhnyuk. For a system to be capable of reliably defending against autonomous weaponry, it, too, needs to be autonomous.

MaXon’s solution consists of ground turrets scanning the sky with infrared sensors, with additional input from a network of radars that detects approaching Shahed drones at distances of, typically, 12 to 16 km. The turrets fire autonomous fixed-wing interceptor drones, fitted with explosive warheads, toward the approaching Shaheds at speeds of nearly 300 km/h. To boost the chances of successful interception, MaXon is also fielding an airborne anti-Shahed fortification system consisting of helium-filled aerostats hovering above the city that dispatch the interceptors from a higher altitude.

“We are trying to increase the level of automation of the system compared to existing solutions,” says Solntsev. “We need automatic detection, automatic takeoff, and automatic mid-track guidance so that we can guide the interceptor before it can itself flock the target.”

An interceptor drone, part of the U.S. MEROPS defensive system, is tested in Poland on 18 November 2025. WOJTEK RADWANSKI/AFP/GETTY IMAGES

In November 2025, the Ukrainian military announced it had been conducting successful trials of the MEROPS Shahed drone-interceptor system developed by the U.S. startup Project Eagle, another of former Google CEO Eric Schmidt’s Ukraine defense ventures. Like the MaXon gear, the system can operate largely autonomously and has so far downed over 1,000 Shaheds.

What Works in the Lab Doesn’t Necessarily Fly on the Battlefield

Despite the progress on both sides, analysts say that the kind of robotic warfare imagined by Azhnyuk won’t be a reality for years.

“The software for drone collaboration is there,” says Kate Bondar, a former policy advisor for the Ukrainian government and currently a research fellow at the U.S. Center for Strategic and International Studies. “Drones can fly in labs, but in real life, [the forces] are afraid to deploy them because the risk of a mistake is too high,” she adds.

Ukrainian soldiers watch a GOR reconnaissance drone take to the sky near Pokrovsk, in the Donetsk region, on 10 March 2025. ANDRIY DUBCHAK/FRONTLINER/GETTY IMAGES

In Bondar’s view, powerful AI-equipped drones won’t be deployed in large numbers given the current prices for high-end processors and other advanced components. And, she adds, the more autonomous the system needs to be, the more expensive the processors and sensors it must have.
“For these cheap attack drones that fly only once, you don’t install a high-resolution camera that [has] the resolution for AI to see properly,” she says. “[You install] the cheapest camera. You don’t want expensive chips that can run AI algorithms either. Until we can achieve this balance of technological sophistication, when a system can conduct a mission but at the lowest price possible, it won’t be deployed en masse.”

While existing AI systems are doing a good job recognizing and following large objects like Shaheds or tanks, experts question their ability to reliably distinguish and pursue smaller, more nimble, or inconspicuous targets. “When we’re getting into more specific questions, like can it distinguish a Russian soldier from a Ukrainian soldier, or at least a soldier from a civilian? The answer is no,” says Bondar. “Also, it’s one thing to track a tank, and it’s another to track infantrymen riding buggies and motorcycles that are moving very fast. That’s really challenging for AI to track and strike precisely.”

Clark, at the Hudson Institute, says that although the AI algorithms used to guide the Russian and Ukrainian drones are “pretty good,” they rely on information provided by sensors that “aren’t good enough.” “You need multiphenomenology sensors that are able to look at infrared and visual and, in some cases, different parts of the infrared spectrum to be able to figure out if something is a decoy or a real target,” he says.

German defense analyst Lange agrees that right now, battlefield AI image-recognition systems are too easily fooled. “If you compress reality into a 2D image, a lot of things can be easily camouflaged—like what Russia did recently, when they started drawing birds on the back of their drones,” he says.

Autonomy Remains Elusive on the Ground and at Sea, Too

To make Ukraine’s emerging uncrewed ground vehicles (UGVs) equally self-sufficient will be an even greater task, in Clark’s view. Still, Bondar expects major advances to materialize within the next several years, even if humans are still going to be part of the decision-making loop.

A mobile electronic-warfare system built by PiranhaTech is demonstrated near Kyiv on 21 October 2025. DANYLO ANTONIUK/ANADOLU/GETTY IMAGES

“I think in two or three years, we will have pretty good full autonomy, at least in good weather conditions,” she says, referring to aerial drones in particular. “Humans will still be in the loop for some years, simply because there are so many unpredictable situations when you need an intervention. We won’t be able to fully rely on the machine for at least another 10 or 15 years.”

Ukrainian defenders are apprehensive about that autonomous future. The boom in drone innovation has come hand in hand with the development of sophisticated jamming and radio-frequency detection systems. But a lot of that innovation will become obsolete once the pendulum swings away from human control. Ukrainians got their first taste of dealing with unjammable drones in mid-2024, when Russia began rolling out fiber-optic tethered drones. Now they have to brace for a threat on a much larger scale.
An experimental drone is demonstrated at the Brave1 defense-tech incubator in Kyiv. DANYLO DUBCHAK/FRONTLINER/GETTY IMAGES

“Today, we have a situation where we have lots of signals on the battlefield, but in the near future, in maybe two to five years, UAVs are not going to be sending any signals,” says Oleksandr Barabash, CTO of Falcons, a Ukrainian startup that has developed a smart radio-frequency detection system capable of revealing the precise locations of enemy radio sources such as drones, control stations, and jammers.

Last September, Falcons secured funding from the U.S.-based dual-use tech fund Green Flag Ventures to scale production of its technology and work toward NATO certification. But Barabash admits that its system, like all technologies fielded in Ukrainian war zones, has an expiration date. Instead of radio-frequency detectors, Barabash thinks, the next R&D push needs to focus on passive radar systems capable of identifying small, fast-moving targets based on signals from sources like TV towers or radio transmitters that propagate through the environment and are reflected by those moving targets. Passive radars have a significant advantage in the war zone, according to Barabash: Since they don’t emit their own signal, they can’t be easily discovered by the enemy.

“Active radar is emitting signals, so if you are using active radars, you are target No. 1 on the front line,” Barabash says.

Bondar, on the other hand, thinks that the increased onboard compute power needed for AI-controlled drones will, by itself, generate enough electromagnetic radiation to prevent autonomous drones from ever operating completely undetectably.

“You can have full autonomy, but you will still have systems onboard that emit electromagnetic radiation or heat that can be detected,” says Bondar. “Batteries emit electromagnetic radiation, motors emit heat, and [that heat can be] visible in infrared from far away. You just need to have the right sensors to be able to identify it in advance.” The takeaway, she adds, is how capable contemporary detection systems have become, and how technically challenging it is to design drones that can reliably operate in the Ukrainian battlefield environment.

There Will Be Nowhere to Hide from Autonomous Drones

When autonomous drones become a standard weapon of war, their threat will extend far beyond the battlefields of Ukraine. Autonomous turrets and drone-interceptor fortifications might soon dot the perimeters of European cities, particularly in the eastern part of the continent.

A fixed-wing drone is tested in Ukraine in April 2025. ANDREW KRAVCHENKO/BLOOMBERG/GETTY IMAGES

Nefarious actors from all over the world have closely watched Ukraine and taken notes, warns Lange. Today, FPV drones are being used by Islamist terrorists in Africa and by Mexican drug cartels fighting local authorities.

When autonomous killing machines become widely available, it’s likely that no city will be safe. “We might see nets above city centers, protecting civilian streets,” Lange says. “In every case, the West needs to start performing similar kinetic-defense development that we see in Ukraine. Very rapid iteration and testing cycles to find solutions.”

Azhnyuk is concerned that the historic defenders of Europe—the United States and the European countries themselves—are falling behind. “We are in danger,” he says.
While Russia and Ukraine made major strides in their drones and countermeasures over the past year, “Europe and the United States have progressed, in the best-case scenario, from the winter-of-2022 technology to the summer-of-2022 technology.”

“The gap is getting wider,” he warns. “I think the next few years are very dangerous for the security of Europe.”
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA
Summer School on Multi-Robot Systems: 29 July–4 August 2026, PRAGUE

Enjoy today’s videos!

Human athletes demonstrate versatile and highly dynamic tennis skills to successfully conduct competitive rallies with a high-speed tennis ball. However, reproducing such behaviors on humanoid robots is difficult, partially due to the lack of perfect humanoid action data or human kinematic motion data in tennis scenarios for reference. In this work, we propose LATENT, a system that Learns Athletic humanoid TEnnis skills from imperfect human motioN daTa.

[ LATENT ]

A beautifully designed robot inspired by Strandbeests.

[ Cranfield University ]

We believe we’re the first robotics company to demonstrate a robot peeling an apple with dual dexterous human-like hands. This breakthrough closes a key gap in robotics, achieving bimanual, contact-rich manipulation and moving far beyond the limits of simple grippers.

Today’s AI models (VLMs) are excellent at perception but struggle with action. Controlling high-degree-of-freedom hands for tasks like this is incredibly complex, and precise finger-level teleoperation is nearly impossible for humans. Our first step was a shared-autonomy system: rather than controlling every finger, the operator triggers pre-learned skills like a “rotate apple or tennis ball” primitive via a keyboard press or pedal. This makes scalable data collection and RL training possible.

How does the AI manage this? We created “MoDE-VLA” (Mixture of Dexterous Experts). It fuses vision, language, force, and touch data by using a team of specialist “experts,” making control in high-dimensional spaces stable and effective. The combination of these two innovations allows for seamless, contact-rich manipulation. The human provides high-level guidance, and the robot executes the complex in-hand coordination required.

[ Sharpa ]

Thanks, Alex!
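Sharpa’s description is specific to its own system, but the underlying mixture-of-experts pattern is generic: a gating network scores each specialist, and the specialists’ outputs are blended by those scores. Below is a from-scratch toy sketch of that general pattern, untrained and with invented dimensions; it is emphatically not Sharpa’s MoDE-VLA.

```python
# Generic mixture-of-experts gating, the broad pattern behind systems
# described as a "mixture of dexterous experts." Toy sketch only:
# untrained random weights, invented dimensions.

import math, random

random.seed(0)
DIM, N_EXPERTS, OUT = 8, 4, 3   # fused input features, experts, action dims

# Random linear "experts" and a random linear gating network.
experts = [[[random.gauss(0, 1) for _ in range(DIM)] for _ in range(OUT)]
           for _ in range(N_EXPERTS)]
gate_w = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_EXPERTS)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def softmax(zs):
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x):
    """Blend each expert's action proposal by the gate's confidence in it."""
    weights = softmax(matvec(gate_w, x))
    outputs = [matvec(e, x) for e in experts]
    return [sum(w * o[i] for w, o in zip(weights, outputs)) for i in range(OUT)]

# x stands in for fused vision/language/force/touch features.
x = [random.gauss(0, 1) for _ in range(DIM)]
print(moe_forward(x))
```

In a trained system the gate learns which specialist to trust for which regime of contact; the blending mechanics are no more complicated than this.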
It was great to see our name amongst the other “AI Native” companies during the NVIDIA GTC keynote. NVIDIA Isaac Lab helps us train reinforcement learning policies that enable the UMV to drive, jump, flip, and hop like a pro.

[ Robotics and AI Institute ]

This Finger-Tip Changer technology was jointly researched and developed through a collaboration between Tesollo and RoCogMan LaB at Hanyang University ERICA. The project integrates Tesollo’s practical robotic hand development experience with the lab’s expertise in robotic manipulation and gripper design.

I don’t know why more robots don’t do this. Also, those pointy fingertips are terrifying.

[ RoCogMan LaB ]

Here’s an upcoming ICRA paper from the Fluent Robotics Lab at the University of Michigan featuring an operational PR2! With functional batteries!!!

[ Fluent Robotics Lab ]

This video showcases the field tests and interaction capabilities of KAIST Humanoid v0.7, developed at the DRCD Lab with in-house actuators. The control policy was trained through deep reinforcement learning leveraging human demonstrations.

[ KAIST DRCD Lab ]

This needs to come in adult size.

[ DEEP Robotics ]

I did not know this, but apparently shoeboxes are really annoying to manipulate because if you grab them by the lid, they just open, so specialized hardware is required.

[ Nomagic ]

Thanks, Gilmarie!

This paper presents a method to recover quadrotor Unmanned Air Vehicles (UAVs) from a throw, when no control parameters are known before the throw.

[ MAVLab ]

Uh oh, robots can see glass doors now. We’re in trouble.

[ LimX Dynamics ]

This drone hugs trees
A technical examination of the sensing, motion control, power, and thermal challenges facing humanoid robotics engineers — with component-level design strategies for real-world deployment.

What Attendees Will Learn

Why motion control remains the hardest unsolved problem — Explore the modelling complexity, real-time feedback requirements, and sensor-fusion demands of maintaining stable bipedal locomotion across dynamic environments.

How sensing architectures enable perception and safety — Understand the role of inertial measurement units, force/torque feedback, and tactile sensing in achieving reliable human-robot interaction and collision avoidance.

What power and thermal constraints mean for system design — Examine the trade-offs in battery chemistry selection (LFP vs. NCA), DC/DC converter topologies, and thermal protection strategies that determine operational endurance.

How the industry is transitioning from prototype to mass production — Learn about the shift toward modular architectures, cost-driven component selection, and supply-chain readiness projected for the late 2020s.

Download this free whitepaper now!
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

All legged robots deployed “in the wild” to date were given a body plan that was predefined by human designers and could not be redefined in situ. The manual and permanent nature of this process has resulted in very few species of agile terrestrial robots beyond familiar four-limbed forms. Here, we introduce highly athletic modular building blocks and show how they enable the automatic design and rapid assembly of novel agile robots that can “hit the ground running” in unstructured outdoor environments.

[ Northwestern University Center for Robotics and Biosystems ] [ Paper ] via [ Gizmodo ]

If you were going to develop the ideal urban delivery robot more or less from scratch, it would be this.

[ RIVR ]

Don’t get me wrong, there are some clever things going on here, but I’m still having a lot of trouble seeing where the unique, sustainable value is for a humanoid robot performing these sorts of tasks.

[ Figure ]

One of those things that you don’t really think about as a human, but is actually pretty important.

[ Paper ] via [ ETH Zurich ]

We propose TRIP-Bag (Teleoperation, Recording, Intelligence in a Portable Bag), a portable, puppeteer-style teleoperation system fully contained within a commercial suitcase, as a practical solution for collecting high-fidelity manipulation data across varied settings.

[ KIMLAB ]

We propose an open-vocabulary semantic exploration system that enables robots to maintain consistent maps and efficiently locate (unseen) objects in semi-static real-world environments using LLM-guided reasoning.

[ TUM ]

That’s it folks, we have no need for real pandas anymore—if we ever did in the first place. Be honest, what has a panda done for you lately?

[ MagicLab ]

RoboGuard is a general-purpose guardrail for ensuring the safety of LLM-enabled robots. RoboGuard is configured offline with high-level safety rules and a robot description, reasons about how these safety rules are best applied in the robot’s context, then synthesizes a plan that maximally follows user preferences while ensuring safety.

[ RoboGuard ]

In this demonstration, a small team responds to a (simulated) radiation contamination leak at a real nuclear reactor facility. The team deploys their reconfigurable robot to accompany them through the facility. As the station is suddenly plunged into darkness, the robot’s camera is hot-swapped to thermal so that it can continue on. Upon reaching the approximate location of the contamination, the team installs a Compton gamma-ray camera and pan-tilt illuminating device. The robot autonomously steps forward, locates the radiation source, and points it out with the illuminator.

[ Paper ]

On March 6th, 2025, the Robomechanics Lab at CMU was flooded with 4 feet of black water (i.e., mixed with sewage). We lost most of the robots in the lab, and as a tribute, my students put together this “In Memoriam” video. It includes some previously unreleased robots and video clips.

[ Carnegie Mellon University Robomechanics Lab ]

There haven’t been a lot of successful education robots, but here’s one of them.

[ Sphero ]

The opening keynote from the 2025 Silicon Valley Humanoids Summit: “Insights Into Disney’s Robotic Character Platform,” by Moritz Baecher, Director, Zurich Lab, Disney Research.

[ Humanoids Summit ]
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

The functional replication and actuation of complex structures inspired by nature is a longstanding goal for humanity. Creating such complex structures combining soft and rigid features and actuating them with artificial muscles would further our understanding of natural kinematic structures. We printed a biomimetic hand, composed of a rigid skeleton, soft joint capsules, tendons, and printed touch sensors, in a single print process.

[ Paper ] via [ SRL ]

Two Boston Dynamics product managers talk about their favorite classic BD robots, and then I talk about mine.

And this is Boston Dynamics’ LittleDog, doing legged locomotion research 16 or so years ago in what I’m pretty sure is Katie Byl’s lab at UCSB.

[ Boston Dynamics ]

This is our latest work on the trajectory planning method for floating-based articulated robots, enabling global path search in complex and cluttered environments.

[ DRAGON Lab ]

Thanks, Moju!

OmniPlanner is a unified solution for exploration and inspection-path planning (as well as target reach) across aerial, ground, and underwater robots. It has been verified through extensive simulations and a multitude of field tests, including in underground mines, ballast water tanks, forests, university buildings, and submarine bunkers.

[ NTNU ]

Thanks, Kostas!

In the ARISE project, the FZI Research Center for Information Technology and its international partners ETH Zurich, University of Zurich, University of Bern, and University of Basel took a major step toward future lunar missions by testing cooperative autonomous multirobot teams under outdoor conditions.

[ FZI ]

Welcome to the future, where there are no other humans.

[ Zhejiang Humanoid ]

This is our latest work on robotic fish, and it’s also the first underwater robot from DRAGON Lab.

[ DRAGON Lab ]

Thanks, Moju!

Watch this one simple trick to make humanoid robots cheaper and safer!

[ Zhejiang Humanoid ]

Gugusse and the Automaton is an 1897 French film by Georges Méliès featuring a humanoid robot in a depiction that’s nearly as realistic as some of the humanoid promo videos we’ve seen lately.

[ Library of Congress ] via [ Gizmodo ]

At Agility, we create automated solutions for the hardest work. We’re incredibly proud of how far we’ve come, and can’t wait to show you what’s next.

[ Agility ]

Kamel Saidi, robotics program manager at the National Institute of Standards and Technology (NIST), on how performance standards can pave the way for humanoid adoption.

[ Humanoids Summit ]

Anca Dragan is no stranger to Waymo. She worked with us for six years while also at UC Berkeley and now at Google DeepMind. Her focus on making AI safer helped Waymo as it launched commercially. In this final episode of our season, Anca describes how her work enables AI agents to work fluently with people, based on human goals and values.

[ Waymo Podcast ]

This UPenn GRASP SFI Seminar is by Junyao Shi: “Unlocking Generalist Robots with Human Data and Foundation Models.”

Building general-purpose robots remains fundamentally constrained by data scarcity and labor-intensive engineering. Unlike vision and language, robotics lacks large, diverse datasets that span tasks, environments, and embodiments, thus limiting both scalability and generalization. This talk explores how human data and foundation models trained at scale can help overcome these bottlenecks.

[ UPenn ]
Self-driving cars often struggle with situations that are commonplace for human drivers. When confronted with construction zones, school buses, power outages, or misbehaving pedestrians, these vehicles often behave unpredictably, leading to crashes or freezing events that significantly disrupt local traffic and can block first responders from doing their jobs. Because self-driving cars cannot reliably handle such routine problems, self-driving companies use human babysitters to remotely supervise them and intervene when necessary.

This idea—humans supervising autonomous vehicles from a distance—is not new. The U.S. military has been doing it since the 1980s with unmanned aerial vehicles (UAVs). In those early years, the military experienced numerous accidents due to poorly designed control stations, lack of training, and communication delays.

As a Navy fighter pilot in the 1990s, I was one of the first researchers to examine how to improve UAV remote-supervision interfaces. The thousands of hours I and others have spent working on and observing these systems generated a deep body of knowledge about how to safely manage remote operations. With recent revelations that U.S. commercial self-driving car remote operations are handled by operators in the Philippines, it is clear that self-driving companies have not learned the hard-earned military lessons that would promote safer use of self-driving cars today.

While stationed in the Western Pacific during the Gulf War, I spent a significant amount of time in air operations centers, learning how military strikes were planned, implemented, and then replanned when the original plan inevitably fell apart. After obtaining my PhD, I leveraged this experience to begin research on the remote control of UAVs for all three branches of the U.S. military. Sitting shoulder to shoulder in tiny trailers with operators flying UAVs in local exercises or from 4,000 miles away, my job was to learn about the pain points for the remote operators, as well as to identify possible improvements, as they executed supervisory control over UAVs that might be flying halfway around the world.

Supervisory control refers to situations where humans monitor and support autonomous systems, stepping in when needed. For self-driving cars, this oversight can take several forms. The first is teleoperation, where a human remotely controls the car’s speed and steering from afar. Operators sit at a console with a steering wheel and pedals, similar to a racing simulator. Because this method relies on real-time control, it is extremely sensitive to communication delays.

The second form of supervisory control is remote assistance. Instead of driving the car in real time, a human gives higher-level guidance. For example, an operator might click a path on a map (called laying “breadcrumbs”) to show the car where to go, or interpret information the AI cannot understand, such as hand signals from a construction worker. This method tolerates more delay than teleoperation but is still time-sensitive.

Five Lessons From Military Drone Operations

Over 35 years of UAV operations, the military consistently encountered five major challenges that provide valuable lessons for self-driving cars.

Latency

Latency—delays in sending and receiving information due to distance or poor network quality—is the single most important challenge for remote vehicle control. Humans also have their own built-in delay: neuromuscular lag. Even under perfect conditions, people cannot reliably respond to new information in less than 200 to 500 milliseconds. In remote operations, where communication lag already exists, this makes real-time control even more difficult.
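To see why latency dominates, consider a rough loop budget. The delay values below are illustrative assumptions, not measurements from any deployed system; the point is that the delays add, and the vehicle covers real distance before a corrected command can take effect.

```python
# Back-of-envelope remote-driving latency budget. All delay values are
# illustrative assumptions, not measurements of any real deployment.

delays_ms = {
    "camera capture + video encode": 80,
    "uplink over cellular network": 120,
    "operator display + decode": 50,
    "human neuromuscular response": 350,   # mid-range of the 200-500 ms cited above
    "command downlink": 120,
    "vehicle actuation": 100,
}

total_s = sum(delays_ms.values()) / 1000.0
speed_ms = 50 * 1000 / 3600                # 50 km/h, in meters per second

print(f"total loop delay: {total_s:.2f} s")                       # 0.82 s
print(f"distance traveled blind at 50 km/h: {speed_ms * total_s:.1f} m")  # ~11 m
```

Under the two-second lags the early UAV pilots faced, that blind distance more than doubles, which squares with the accident record described next.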
In early drone operations, U.S. Air Force pilots in Las Vegas (the primary U.S. UAV operations center) attempted to take off and land drones in the Middle East using teleoperation. With at least a two-second delay between command and response, the accident rate was 16 times that of fighter jets conducting the same missions. The military switched to local line-of-sight operators and eventually to fully automated takeoffs and landings. When I interviewed the pilots of these UAVs, they all stressed how difficult it was to control the aircraft with significant time lag.

Self-driving car companies typically rely on cellphone networks to deliver commands. These networks are unreliable in cities and prone to delays. This is one reason many companies prefer remote assistance instead of full teleoperation. But even remote assistance can go wrong. In one incident, a Waymo operator instructed a car to turn left when a traffic light appeared yellow in the remote video feed—but the network latency meant that the light had already turned red in the real world. After moving its remote operations center from the U.S. to the Philippines, Waymo’s latency increased even further. It is imperative that control not be so remote, both to resolve the latency issue and to increase oversight of security vulnerabilities.

Workstation Design

Poor interface design has caused many drone accidents. The military learned the hard way that confusing controls, difficult-to-read displays, and unclear autonomy modes can have disastrous consequences. Depending on the specific UAV platform, the FAA attributed between 20 and 100 percent of the human-error-related Army and Air Force UAV crashes through 2004 to poor interface design.

UAV crashes (1986–2004) caused by human-factors problems, including poor interface and procedure design. The interface and procedure columns do not sum to 100 percent because both factors could be present in an accident.

Platform                 Human Factors   Interface Design   Procedure Design
Army Hunter              47%             20%                20%
Army Shadow              21%             80%                40%
Air Force Predator       67%             38%                75%
Air Force Global Hawk    33%             100%               0%

Many UAV crashes have been caused by poor human control systems. In one case, buttons were placed on the controllers such that it was relatively easy to accidentally shut off the engine instead of firing a missile—and remote operators did exactly that. The self-driving industry shows hints of comparable issues. Some autonomous shuttles use off-the-shelf gaming controllers, which—while inexpensive—were never designed for vehicle control. The off-label use of such controllers can lead to mode confusion, which was a factor in a recent shuttle crash. Significant human-in-the-loop testing is needed to avoid such problems, not only prior to system deployment but also after major software upgrades.

Operator Workload

Drone missions typically include long periods of surveillance and information gathering, occasionally ending with a missile strike. These missions can sometimes last for days; for example, while the military waits for a person of interest to emerge from a building. As a result, the remote operators experience extreme swings in workload: sometimes overwhelming intensity, sometimes crushing boredom. Both conditions can lead to errors.

When operators teleoperate drones, workload is high and fatigue can quickly set in. But when onboard autonomy handles most of the work, operators can become bored, complacent, and less alert. This pattern is well documented in UAV research.

Self-driving car operators are likely experiencing similar issues for tasks ranging from interpreting confusing signs to helping cars escape dead ends. In simple scenarios, operators may be bored; in emergencies—like driving into a flood zone or responding during a citywide power outage—they can become quickly overwhelmed.

The military has tried for years to have one person supervise many drones at once, because it is far more cost-effective. However, cognitive switching costs (regaining awareness of a situation after switching control between drones) result in workload spikes and high stress. That, coupled with increasingly complex interfaces and communication delays, has made this extremely difficult.

Self-driving car companies likely face the same roadblocks. They will need to model operator workloads and be able to reliably predict what staffing should be and how many vehicles a single person can effectively supervise, especially during emergency operations. If every self-driving car turns out to need a dedicated human paying close attention, such operations would no longer be cost-effective.
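One way to start reasoning about that staffing question is to treat intervention requests as a queueing problem. The sketch below is a deliberately crude Erlang-style model; the fleet size, request rate, and handling time are invented, and a real staffing model would need measured intervention data, especially for emergency surges.

```python
# Crude queueing sketch of the remote-operator staffing question.
# All rates are invented for illustration; real staffing models would
# need measured intervention data, especially for emergencies.

from math import factorial

def erlang_c(servers: int, offered_load: float) -> float:
    """Probability that an arriving request must wait (M/M/c queue)."""
    a = offered_load   # (arrival rate) / (service rate), in erlangs
    top = a**servers / factorial(servers) * servers / (servers - a)
    bottom = sum(a**k / factorial(k) for k in range(servers)) + top
    return top / bottom

fleet = 500                    # vehicles in service (assumed)
req_per_hour = 0.5             # intervention requests per vehicle-hour (assumed)
service_min = 2.0              # minutes an operator spends per request (assumed)

load = fleet * req_per_hour * (service_min / 60.0)   # ~8.3 erlangs
for ops in (10, 12, 15, 20):
    if ops > load:             # the queue is only stable with ops > load
        print(f"{ops} operators: P(request waits) = {erlang_c(ops, load):.1%}")
```

Even this toy model makes the hard case visible: multiply the request rate by ten during a citywide emergency, and no affordable staffing level keeps every stuck vehicle from waiting.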
Workstation Design

Poor interface design has caused many drone accidents. The military learned the hard way that confusing controls, difficult-to-read displays, and unclear autonomy modes can have disastrous consequences. Depending on the specific UAV platform, the FAA attributed between 20 and 100 percent of the Army and Air Force UAV crashes caused by human error through 2004 to poor interface design.

UAV crashes (1986–2004) caused by human factors problems, including poor interface and procedure design. The two design categories do not sum to 100 percent because both factors could be present in the same accident.

Platform                 Human Factors   Interface Design   Procedure Design
Army Hunter              47%             20%                20%
Army Shadow              21%             80%                40%
Air Force Predator       67%             38%                75%
Air Force Global Hawk    33%             100%               0%

Many UAV crashes have been caused by poor human control systems. In one case, buttons were placed on the controller such that it was relatively easy to accidentally shut off the engine instead of firing a missile, and this poor design led to accidents in which remote operators inadvertently shut down the engine when they meant to launch a missile. The self-driving industry shows hints of comparable issues. Some autonomous shuttles use off-the-shelf gaming controllers, which—while inexpensive—were never designed for vehicle control. The off-label use of such controllers can lead to mode confusion, which was a factor in a recent shuttle crash. Significant human-in-the-loop testing is needed to avoid such problems, not only before system deployment but also after major software upgrades.

Operator Workload

Drone missions typically include long periods of surveillance and information gathering, occasionally ending with a missile strike. These missions can sometimes last for days; for example, while the military waits for a person of interest to emerge from a building. As a result, remote operators experience extreme swings in workload: sometimes overwhelming intensity, sometimes crushing boredom. Both conditions can lead to errors.

When operators teleoperate drones, workload is high and fatigue can quickly set in. But when onboard autonomy handles most of the work, operators can become bored, complacent, and less alert. This pattern is well documented in UAV research.

Self-driving car operators likely experience similar issues for tasks ranging from interpreting confusing signs to helping cars escape dead ends. In simple scenarios, operators may be bored; in emergencies—like driving into a flood zone or responding during a citywide power outage—they can quickly become overwhelmed.

The military has tried for years to have one person supervise many drones at once, because doing so is far more cost-effective. However, cognitive switching costs (the effort of regaining awareness of a situation after switching control between drones) cause workload spikes and high stress. That, coupled with increasingly complex interfaces and communication delays, has made such supervision extremely difficult.

Self-driving car companies likely face the same roadblocks. They will need to model operator workload and reliably predict appropriate staffing levels and how many vehicles a single person can effectively supervise, especially during emergency operations. If every self-driving car turns out to need a dedicated human paying close attention, such operations would no longer be cost-effective.
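A crude way to see the economics is to treat remote supervision as a utilization problem. The sketch below is not any company's staffing model; every rate in it is an assumption chosen purely for illustration:

```python
# A crude staffing estimate for remote supervision, treating operator
# demand as a utilization problem. All rates are invented for illustration.

fleet_size = 300                 # vehicles on the road (assumed)
interventions_per_hour = 0.5     # assistance requests per vehicle per hour (assumed)
handle_time_min = 4.0            # average minutes per intervention (assumed)
target_utilization = 0.6         # headroom so emergencies don't queue up

demand_hours = fleet_size * interventions_per_hour * (handle_time_min / 60)
operators_needed = demand_hours / target_utilization

print(f"~{operators_needed:.0f} operators for {fleet_size} vehicles")
print(f"~{fleet_size / operators_needed:.0f} vehicles per operator")
# With these assumptions: ~17 operators, ~18 vehicles per operator
```

Note that an average like this is exactly what breaks during a citywide outage: intervention requests spike together, and a fleet staffed for the average case is suddenly staffed for failure. Realistic models need peak-load analysis, not just mean rates.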
Training

Early drone programs lacked formal training requirements, with training programs designed by pilots, for pilots. Unfortunately, supervising a drone is more akin to air traffic control than to actually flying an aircraft, so the military often placed drone operators in critical roles with inadequate preparation. This caused many accidents. Only years later did the military conduct a proper analysis of the knowledge, skills, and abilities needed for safe remote operations and change its training programs accordingly.

Self-driving companies do not publicly share their training standards, and no regulations currently govern the qualifications of remote operators. On-road safety depends heavily on these operators, yet very little is known about how they are selected or taught. Commercial aviation dispatchers, whose role closely resembles that of self-driving remote operators, are required to complete formal training overseen by the FAA; we should hold commercial self-driving companies to similar standards.

Contingency Planning

Aviation has strong protocols for emergencies, including predefined procedures for lost communication, backup ground control stations, and highly reliable onboard behaviors when autonomy fails. In the military, drones may fly themselves to safe areas or land autonomously if contact is lost. Systems are designed with cybersecurity threats—like GPS spoofing—in mind.

Self-driving cars appear far less prepared. The 2025 San Francisco power outage left Waymo vehicles frozen in traffic lanes, blocking first responders and creating hazards. These vehicles are supposed to perform "minimum-risk maneuvers," such as pulling to the side of the road—but many of them didn't. This suggests gaps in contingency planning and basic fail-safe design.

The history of military drone operations offers crucial lessons for the self-driving car industry. Decades of experience show that remote supervision demands extremely low latency, carefully designed control stations, manageable operator workload, rigorous and well-designed training programs, and strong contingency planning.

Self-driving companies appear to be repeating many of the early mistakes of drone programs. Remote operations are treated as a support feature rather than a mission-critical safety system. But as long as AI struggles with uncertainty, which will be the case for the foreseeable future, remote human supervision will remain essential. The military learned these lessons through painful trial and error, yet the self-driving community appears to be ignoring them. The self-driving industry has the chance—and the responsibility—to learn from our mistakes in combat settings before it harms road users everywhere.
IEEE Spectrum
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today's videos!

Our Lynx M20 robots help transport harvested crops in mountainous farmland—tackling the rural "last mile" logistics challenge.

[ Deep Robotics ]

Once again, I would point out that now that we are reaching peak humanoid robots doing humanoid things, we are inevitably about to see humanoid robots doing nonhumanoid things.

[ Unitree ]

In a study, a team of researchers from the Max Planck Institute for Intelligent Systems, the University of Michigan, and Cornell University show that groups of magnetic microrobots can generate fluidic forces strong enough to rotate objects in different directions without touching them. These microrobot swarms can turn gear systems, rotate objects much larger than the robots themselves, assemble structures on their own, and even pull in or push away many small objects.

[ Science ] via [ Max Planck Institute ]

Bipedal—or two-legged—autonomous robots can be quite agile. This makes them useful for performing tasks on uneven terrain, such as carrying equipment through outdoor environments or performing maintenance on an oceangoing ship. However, unstable or unpredictable conditions also increase the possibility of a robot wipeout. Until now, there's been a significant lack of research into how a robot recovers when its direction shifts—for example, a robot losing balance when a truck makes a quick turn. The team aims to fix this research gap.

[ Georgia Tech ]

Robotics is about controlling energy, motion, and uncertainty in the real world.

[ Carnegie Mellon University ]

Delicious dinner cooked by our robot Robody. We've asked our investors to speak about why they're along for the ride.

[ Devanthro ]

Tilt-rotor aerial robots enable omnidirectional maneuvering through thrust vectoring but introduce significant control challenges due to the strong coupling between joint and rotor dynamics. This work investigates reinforcement learning for omnidirectional aerial motion control on overactuated tiltable quadrotors, prioritizing robustness and agility.

[ Dragon Lab ]

At the [Carnegie Mellon University] Robotic Innovation Center's 75,000-gallon water tank, members of the TartanAUV student group worked to further develop their autonomous underwater vehicle (AUV), called Osprey. The team, which takes part in the annual RoboSub competition sponsored by the U.S. Office of Naval Research, is made up primarily of undergraduate engineering and robotics students.

[ Carnegie Mellon University ]

Sure seems like the only person who would want a robot dog is a person who does not in fact want a dog.

Compact size, industrial capability. Maximum torque of 90 N·m, over 4 hours of no-load runtime, and an IP54 rainproof design. With a 15-kg payload, range exceeds 13 km. Open secondary development empowers industry applications.

[ Unitree ]

If your robot video includes tasty baked goods, it will be included in Video Friday.

[ QB Robotics ]

Astorino is a 6-axis educational robot created for practical and affordable teaching of robotics in schools and beyond. It has been created with 3D printing, which allows for experimentation and the possible addition of parts.
With its design and programming, it replicates the actions of industrial robots, giving students the skills they will need for future work.

[ Astorino by Kawasaki ]

We need more autonomous driving datasets that accurately reflect how sucky driving can be a lot of the time.

[ ASRL ]

This Carnegie Mellon University Robotics Institute Seminar is by CMU's own Victoria Webster-Wood, on "Robots as Models for Biology and Biology as Materials for Robots."

In the last century, it was common to envision robots as shining metal structures with rigid and halting motion. This imagery is in contrast to the fluid and organic motion of living organisms that inhabit our natural world. The adaptability, complex control, and advanced learning capabilities observed in animals are not yet fully understood, and therefore have not been fully captured by current robotic systems. Furthermore, many of the mechanical properties and control capabilities seen in animals have yet to be achieved in robotic platforms. In this talk, I will share an interdisciplinary research vision for robots as models for neuroscience and biology as materials for robots.

[ CMU RI ]
IEEE Spectrum
This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

In past missions to Mars, like those of the Curiosity and Opportunity rovers, the robots relied mostly on human instructions sent from millions of miles away in order to safely navigate the Martian landscape. The Perseverance rover, on the other hand, has zipped across the alien, boulder-ridden land almost completely autonomously, smashing previous records for autonomous driving on Mars. Whereas the Curiosity rover completed about 6.2 percent of its travels autonomously, Perseverance had completed about 90 percent of its travels autonomously as of its 1,312th Martian day since landing (28 October 2024).

Perseverance was able to accomplish such a feat—using remarkably little computing power—thanks to its specially designed autonomous driving algorithm, Enhanced Autonomous Navigation, or ENav. The full details of ENav's inner workings and how well it has performed on Mars are described in a study published in IEEE Transactions on Field Robotics in November 2025.

Autonomous navigation on Mars comes with some advantages but also some serious challenges. On the plus side, almost nothing on the planet moves. Rocks and gravel slopes—while formidable obstacles—remain stationary, offering rovers consistency and predictability in their calculations and pathfinding. On the other hand, Mars is in large part uncharted terrain. "This enormous uncertainty is the major challenge," says Masahiro "Hiro" Ono, supervisor of the Robotic Surface Mobility Group at NASA's Jet Propulsion Laboratory, who helped develop ENav.

Creating a Highly Autonomous Rover

While some images from the space-borne Mars Reconnaissance Orbiter (MRO) exist, they are usually not high enough resolution for ground-based navigation by a rover. In December, NASA engineers performed the first test of a navigation technique that uses a model based on Anthropic's AI to analyze MRO images and generate waypoints—the coordinates used to guide the rover's path—for more complete automation.

But for the majority of today's navigation, Perseverance must rely on images the rover itself takes, analyze them to assess thousands of different paths, and choose a route that won't end in its own demise. The kicker? It must do so with the equivalent computing capacity of an iMac G3, an Apple computer sold in the late 1990s.

The rover's processor must undergo radiation hardening, a process that makes it resilient to the extreme levels of solar radiation and cosmic rays experienced on Mars. Although other radiation-hardened CPUs with more computing power were available at the time of Perseverance's development, the one used has proven reliable in the harsh conditions of outer space. By reusing hardware from previous missions—the same CPU was used in Curiosity—NASA can reduce costs while minimizing risk.

Given its limited computing resources, the ENav algorithm was strategically designed to do the heaviest computing only when driving on challenging terrain. It works by analyzing images of the rover's surroundings and assessing about 1,700 possible paths forward, typically within 6 meters of the rover's current position. Weighing factors such as travel time and terrain roughness, it ranks the possible paths. Finally, it runs a computationally heavy collision-checking algorithm, called ACE (approximate clearance estimation), on only a handful of the top-ranked paths.
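The overall pattern, as the paper describes it, is a funnel: score every candidate cheaply, then spend the expensive check on only the best few. Here is a schematic sketch of that pattern; the path generation, cost terms, and collision check below are stand-ins for illustration, not the actual ENav or ACE implementations:

```python
import math, random

GOAL_HEADING = 0.0   # assume the goal lies straight ahead (illustrative)

def candidate_paths(n=1700, reach_m=6.0):
    """Stand-in for ENav's path generation: n headings out to ~6 meters."""
    return [(-math.pi + i * 2 * math.pi / n, reach_m) for i in range(n)]

def cheap_cost(path):
    """Stand-in heuristic ranking: travel time, terrain roughness,
    and deviation from the goal direction."""
    heading, reach = path
    travel_time = reach / 0.1              # assumed ~0.1 m/s rover speed
    roughness = random.random()            # placeholder terrain score
    turn = abs(heading - GOAL_HEADING)     # prefer driving toward the goal
    return travel_time + 10 * roughness + 5 * turn

def expensive_collision_check(path):
    """Stand-in for ACE: detailed clearance estimation, run sparingly."""
    return random.random() > 0.2           # pretend most top paths are safe

paths = candidate_paths()                  # ~1,700 candidates
ranked = sorted(paths, key=cheap_cost)     # score all of them cheaply
top_k = ranked[:10]                        # only a handful go to ACE
safe = [p for p in top_k if expensive_collision_check(p)]
best = safe[0] if safe else None           # if none pass, stop and replan
print(f"ran ACE on {len(top_k)} of {len(paths)} paths; chose {best}")
```

The savings come from the ratio: under these assumptions, the detailed clearance check runs on perhaps 10 paths instead of all 1,700, which is how a 1990s-class processor can keep up.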
As of October 2024, Perseverance had driven more than 30 kilometers (18.65 miles) and collected 24 samples of rock and regolith. Source: JPL-Caltech/ASU/MSSS/NASA

Exploring the Red Planet With ENav

Perseverance landed on Mars on 18 February 2021. In their study, Ono and his colleagues describe how the rover initially operated with strong human navigation oversight during its first 64 Martian days on the Red Planet, but then went on to predominantly use ENav to travel to one of its major exploration targets: the delta formed by an ancient river that once flowed into Jezero Crater billions of years ago. Scientists believe it could be a prime spot for finding evidence of past alien life, if life ever existed on Mars.

After briefly exploring an area southwest of its landing site, Perseverance jetted counterclockwise around sand dunes toward the ancient river delta at a crisp pace, averaging 201 meters per Martian day. (It's too cold for the rover to travel at night.) Over the course of just 24 Martian days of driving, the rover traveled about 5 kilometers to the foothills of the delta. Ninety-five percent of all driving that month was performed in autonomous mode, an unprecedented amount of autonomous driving on Mars.

Past rovers, such as Curiosity, had to stop and "think" about their paths before moving forward. "That was the main speed bump for Curiosity, why it was so slow to drive autonomously," Ono explains. In contrast, Perseverance is able to think and drive at the same time. "Sometimes [Perseverance] has to stop and think, particularly when it cannot figure out a safe path quickly. But most of the time, particularly on easy terrains, it can keep driving without stopping," Ono says. "That made its autonomous driving an order of magnitude faster."

Opportunity held the previous record for autonomous driving on Mars, traveling 109 meters in a single Martian day. But on 3 April 2023, Perseverance set a new record by driving 331.74 meters autonomously (and 347.69 meters in total) in a single Martian day.

Ono says that fine-tuning the ENav algorithm took a lot of work, but he is happy with its performance. He also emphasizes that continued advances in autonomous navigation are critical if humans want to explore even deeper into space, where communication between Earth and rovers or other spacecraft will become increasingly difficult.

"The automation of the space systems is unstoppable direction that we have to go if we want to explore deeper in space," Ono says. "This is the direction that we must go to push the boundaries and frontiers of space exploration."

This article was updated on 27 February to clarify NASA's reasoning for selecting the CPU used in the Perseverance rover.