Research Complete · 2025

ARES

Autonomous drone system that executes natural language mission commands

Key Highlights

  • Northrop Grumman–sponsored senior design project at UCF
  • Natural language commands parsed by a locally-hosted LLM into ROS2 action sequences
  • Real-time SLAM via RTAB-Map for autonomous navigation
  • Successfully demonstrated agentic takeoff on real hardware
Next.js · ROS2 · LangGraph · Ollama · Python · PX4 · YOLO

Video Demo

Northrop Grumman gave my UCF senior design group a research problem: build an agentic drone system that accepts natural language mission commands and executes them autonomously with a human in the loop. Something like "fly around this room and identify all the red boxes and take pictures of them" and the drone just does it, building a world view as it goes.

How the pieces fit together

I led the software architecture. The core idea: use a locally-hosted LLM through Ollama and LangGraph to parse natural language commands into structured ROS2 action sequences that Nav2 and PX4 could execute. The LLM handles the translation from intent to action plan. Nav2 handles the navigation execution. PX4 talks to the flight controller. RTAB-Map handles real-time SLAM as the drone flies, continuously building a 3D point cloud of its environment.
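To make that concrete, here is a minimal sketch of the parsing layer, assuming LangChain's ChatOllama wrapper and Pydantic for the output schema; the action names, prompt, and graph structure are illustrative, not our exact code:

```python
# Sketch: parse a natural-language command into a structured action plan
# using a local Ollama model wrapped in a single LangGraph node.
from typing import TypedDict
from pydantic import BaseModel
from langchain_ollama import ChatOllama
from langgraph.graph import StateGraph, END

class Action(BaseModel):
    name: str    # e.g. "takeoff", "navigate_to", "capture_image"
    params: dict # action-specific parameters

class MissionPlan(BaseModel):
    actions: list[Action]

class State(TypedDict):
    command: str
    plan: MissionPlan

# Local model served by Ollama; structured output constrains it to the schema
llm = ChatOllama(model="llama3.1").with_structured_output(MissionPlan)

def parse_command(state: State) -> dict:
    plan = llm.invoke(
        "Translate this drone mission into an ordered action plan: "
        + state["command"]
    )
    return {"plan": plan}

graph = StateGraph(State)
graph.add_node("parse", parse_command)
graph.set_entry_point("parse")
graph.add_edge("parse", END)
parser = graph.compile()

# e.g. parser.invoke({"command": "fly around the room and photograph red boxes"})
```

Each action in the resulting plan then maps onto a ROS2 action goal: a Nav2 NavigateToPose goal, a camera capture, and so on.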

Custom ROS2 nodes handled image processing, and a YOLO model identified objects in the camera feed. A WebSocket bridge connected the ROS ecosystem to a React dashboard so a human operator could see live video, the SLAM feed, and telemetry simultaneously.
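On the perception side, the YOLO node looked roughly like the sketch below, assuming the ultralytics API and cv_bridge; the topic names and model weights are placeholders rather than our exact configuration:

```python
# Sketch of a perception node: run YOLO on the RGB stream, publish labels.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from std_msgs.msg import String
from cv_bridge import CvBridge
from ultralytics import YOLO

class PerceptionNode(Node):
    def __init__(self):
        super().__init__("perception_node")
        self.bridge = CvBridge()
        self.model = YOLO("yolov8n.pt")  # placeholder weights
        self.create_subscription(
            Image, "/zed/rgb/image_rect_color", self.on_image, 10)
        self.det_pub = self.create_publisher(String, "/detections", 10)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        results = self.model(frame, verbose=False)[0]
        labels = [results.names[int(c)] for c in results.boxes.cls]
        self.det_pub.publish(String(data=",".join(labels)))

def main():
    rclpy.init()
    rclpy.spin(PerceptionNode())
```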

The human in the loop issued commands through a chat interface, monitored the mission live as the drone operated, and could intervene at any point to cancel it.
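A rough sketch of that command-and-cancel path, assuming the `websockets` library on the bridge side; the topics, port, and message protocol here are simplified stand-ins for our actual implementation:

```python
# Sketch: dashboard messages arrive over a WebSocket; "cancel" aborts the
# in-flight Nav2 goal, anything else is forwarded to the planner.
import asyncio, threading
import rclpy
from rclpy.node import Node
from rclpy.action import ActionClient
from nav2_msgs.action import NavigateToPose
from std_msgs.msg import String
import websockets

class MissionControl(Node):
    def __init__(self):
        super().__init__("mission_control")
        self.nav_client = ActionClient(self, NavigateToPose, "navigate_to_pose")
        self.command_pub = self.create_publisher(String, "/mission/command", 10)
        self.goal_handle = None  # set when a Nav2 goal is accepted

    def handle_operator_message(self, text: str):
        if text.strip().lower() == "cancel":
            if self.goal_handle is not None:
                self.goal_handle.cancel_goal_async()  # abort current navigation
        else:
            self.command_pub.publish(String(data=text))  # forward to planner

async def serve(node: MissionControl):
    async def handler(ws):
        async for message in ws:
            node.handle_operator_message(message)
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

def main():
    rclpy.init()
    node = MissionControl()
    threading.Thread(target=rclpy.spin, args=(node,), daemon=True).start()
    asyncio.run(serve(node))
```

Routing cancellation around the LLM planner like this means the operator never waits on model latency to stop the drone.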

Building hardware from scratch with no hardware background

None of us were electrical or computer engineers, so designing and building a custom drone from a bare frame was a real challenge. Hardware was delayed by the faculty approval process for our purchase requests, and our first drone kit sold out while on order.

We had to design the drone around all of our requirements and assemble it ourselves, which, because it was custom, meant a lot of 3D printing and soldering. The hardware looked like this:

  • Jetson Orin Nano (running ROS2, Nav2, RTAB-Map, and the WebSocket bridges)
  • ZED2i stereo camera for odometry, the video feed, and point cloud data, wired into RTAB-Map roughly as in the launch sketch below
  • Pixhawk flight controller for carrying out low-level commands (accelerate, rotate, etc.)
  • Wi-Fi router and extender providing a local network for the drone and the React dashboard to communicate over
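For a sense of how the SLAM piece wires up, here is a ROS2 launch-file sketch in the style of rtabmap_ros and the zed-ros2-wrapper; treat the exact topic remappings and parameters as assumptions rather than our verified config:

```python
# Sketch: launch RTAB-Map against the ZED2i's RGB-D topics.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package="rtabmap_slam",
            executable="rtabmap",
            parameters=[{
                "frame_id": "base_link",
                "subscribe_depth": True,
                "approx_sync": True,  # tolerate small RGB/depth timestamp skew
            }],
            remappings=[
                ("rgb/image", "/zed/zed_node/rgb/image_rect_color"),
                ("rgb/camera_info", "/zed/zed_node/rgb/camera_info"),
                ("depth/image", "/zed/zed_node/depth/depth_registered"),
            ],
        ),
    ])
```

The resulting map fed both Nav2 for navigation and the SLAM view on the dashboard.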

Sim-to-reality gap

The simulation environment in Gazebo worked reliably throughout, but the sim-to-reality gap was real. Behavior that was consistent in simulation often fell apart on the physical drone. Our drone was slightly overweight and probably had some design flaws, so when we sent the PX4 commands that worked in simulation, the drone would often just crash.
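The commands in question were PX4 offboard setpoints, roughly in the shape below (assuming the px4_msgs interface over the uXRCE-DDS bridge; field names vary across px4_msgs versions, and QoS details are omitted):

```python
# Sketch: stream offboard position setpoints to PX4 at 10 Hz.
import rclpy
from rclpy.node import Node
from px4_msgs.msg import OffboardControlMode, TrajectorySetpoint

class OffboardPublisher(Node):
    def __init__(self):
        super().__init__("offboard_publisher")
        self.mode_pub = self.create_publisher(
            OffboardControlMode, "/fmu/in/offboard_control_mode", 10)
        self.sp_pub = self.create_publisher(
            TrajectorySetpoint, "/fmu/in/trajectory_setpoint", 10)
        # PX4 drops out of offboard mode without a steady setpoint stream
        self.create_timer(0.1, self.tick)

    def now_us(self) -> int:
        return int(self.get_clock().now().nanoseconds / 1000)

    def tick(self):
        mode = OffboardControlMode()
        mode.position = True
        mode.timestamp = self.now_us()
        self.mode_pub.publish(mode)

        sp = TrajectorySetpoint()
        sp.position = [0.0, 0.0, -2.0]  # NED frame: 2 m above ground
        sp.timestamp = self.now_us()
        self.sp_pub.publish(sp)

def main():
    rclpy.init()
    rclpy.spin(OffboardPublisher())
```

A stream like this was enough to fly the drone cleanly in Gazebo; on the overweight airframe, the same setpoints were not.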

What we got to

The project was not a failure. We accomplished a lot even though the physical drone never fully worked:

  • Fully working simulation with complete mission execution
  • User-friendly dashboard for mission control
  • Custom data bridges between ROS2 and React
  • Functional SLAM and YOLO perception
  • Agentic autonomous takeoff demonstrated on the physical drone
  • A 120-page research paper on the project

Our sponsor, Northrop Grumman, was impressed with what we accomplished, and so was the faculty at UCF. We were among 15 groups in the Computer Science department selected to present at the senior design showcase, an exhibition for the best senior design projects, where we placed 3rd among the CS projects.