AVTECH AI
REGULATORY · 7 min read

FAA's AI Safety Assurance Roadmap, explained for operators

The FAA published its plan for certifying AI in aircraft. Here's what it means if you're flying behind one.

In 2024, the FAA published a document called the Roadmap for Artificial Intelligence Safety Assurance, Version I. Most of the operator community has not read it, which is understandable. It's a regulatory roadmap, not exactly bedtime reading. But if you operate aircraft, you're going to encounter AI-based products in the next five years, and it's worth understanding what the FAA is saying about how those products will be certified, and what that means for what you can trust.

This is the short version.

What the roadmap actually is

The roadmap is the FAA's stated approach to certifying machine learning systems in aircraft. It's not a regulation. It's a planning document that describes how the agency intends to handle the certification challenge that AI presents, which is fundamentally different from how aviation software has been certified historically.

Traditional avionics software is certified under DO-178C, which assumes the code is deterministic. Given the same input, you get the same output. You can trace every line back to a requirement and verify it. This works fine for autopilots and FMS computers, which are large, complicated systems, but ultimately predictable ones.

Machine learning systems don't work like that. The behavior of a trained neural network is determined by its weights, which are determined by training data, which can't be traced back to requirements the same way. You can run the same input through the same network and get the same output, but you can't easily explain why the output is what it is, and you can't verify it the way you verify normal code.
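For the technically inclined, the distinction can be shown with a toy example. This is an illustration only, not avionics code; the weights and inputs are invented:

```python
# Toy illustration: a single fixed-weight "neuron". Given the same input,
# the output is identical every run -- the trained system IS deterministic.
# What breaks down is traceability: nothing connects the value 0.73 back
# to a written requirement the way DO-178C expects of each line of code.

def tiny_network(x, weights):
    """Weighted sum of inputs, passed through a simple threshold."""
    total = sum(w * xi for w, xi in zip(weights, x))
    return 1.0 if total > 0.5 else 0.0

weights = [0.73, -0.21, 0.40]   # produced by training, not by a spec
reading = [1.0, 0.5, 0.2]       # some sensor inputs

first = tiny_network(reading, weights)
second = tiny_network(reading, weights)
assert first == second  # same input, same output, every time
```

The certification problem is not unpredictability at run time; it's that the behavior lives in the weights, and the weights have no requirement to trace to.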

The FAA has to figure out how to certify this. The roadmap is the agency telling the industry what its approach will be.

What it says

The roadmap is built around four principles, and they matter for operators.

First, the FAA expects AI systems to be safety-assured, not just trained. There's a difference between an AI that performs well on a test set and an AI you can trust on an aircraft. The roadmap requires evidence that the system has been evaluated for safety across the operational envelope, not just statistically validated.

Second, the FAA expects human-in-the-loop architectures for any AI that affects flight-critical functions. This is enormous. It means that for the foreseeable future, certified AI in aircraft will be advisory or assistive, not autonomous. The system can recommend, alert, or augment. It cannot replace the pilot's authority.

Third, the FAA expects continuous monitoring of deployed systems. AI behavior can drift over time, especially if the system is updated or operates in conditions it wasn't trained for. The roadmap calls for ongoing assurance, not just one-time certification.

Fourth, the FAA acknowledges that the existing DO-178C and DO-254 frameworks are not sufficient by themselves. New means of compliance will be developed for AI systems.

What this means for you

For operators, three things follow.

If a vendor tells you their AI product is "FAA certified" and the product is doing more than displaying advisory information, ask questions. As of 2026, no machine learning system has been certified to operate on a flight-critical path in a civil aircraft. Daedalean and partners are working toward DAL-C certification for visual traffic detection, but they aren't there yet. Anything else claiming flight-critical certification is overstating what it actually has.

If a vendor's product is advisory only, it can operate today and the regulatory framework supports it. Advisory software doesn't need full DO-178C certification because it isn't replacing certified functions. The pilot remains the authority. The software is information.

If you're evaluating AI products for your operation, the questions to ask are about how the system fails, not just how it succeeds. Does it know when it's uncertain? Does it degrade gracefully when it loses a data source? Does it tell you when it's operating outside its training envelope? These are the questions the roadmap implicitly demands, and they're the right questions to bring to a vendor demo.
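Those questions can be written down as a simple checklist to bring to a vendor evaluation. The sketch below is hypothetical; the field names and the `evaluate_vendor` function are invented for illustration and are not part of any FAA guidance:

```python
# Hypothetical sketch: the failure-mode questions from this section,
# expressed as a checklist you could run against a vendor's answers.

FAILURE_MODE_QUESTIONS = {
    "reports_uncertainty":   "Does the system tell you when it's uncertain?",
    "degrades_gracefully":   "Does it degrade gracefully when it loses a data source?",
    "flags_out_of_envelope": "Does it warn when operating outside its training envelope?",
}

def evaluate_vendor(answers):
    """Return the questions the vendor could not answer 'yes' to."""
    return [question for key, question in FAILURE_MODE_QUESTIONS.items()
            if not answers.get(key, False)]

# A vendor who can only demonstrate uncertainty reporting leaves two gaps:
gaps = evaluate_vendor({"reports_uncertainty": True})
```

A vendor who can answer all three with evidence is ahead of most of the market; a vendor who can answer none of them is selling you a demo, not a tool.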

The FAA isn't trying to slow down AI in aviation. The roadmap is genuinely trying to find a path forward. But it's a path that takes time, and the things it allows today are advisory layers that augment the pilot. Anything more is going to be a longer wait.


See Avtech in your operation.

Advisory software for cockpit and fleet. Built for the way aviation actually works.

Request a demo →