
Data in Flight: Establishing and Maintaining Trust in the Public Sector's Use of Drones

08 April 2025

By Lawrence Baker

Unmanned aircraft systems and public trust

Government entities at all levels, from national and federal agencies down to local authorities, are increasingly using Unmanned Aircraft Systems (UAS), commonly known as drones, for many purposes, including:

  • environmental monitoring
  • traffic monitoring
  • law enforcement
  • public safety
  • broader research initiatives

Using drones in these areas can be transformative for analyzing environmental and infrastructure conditions, optimizing traffic, conducting surveillance, enabling rapid response, and supporting search and rescue, particularly with Artificial Intelligence (AI)/Machine Learning (ML) integration. However, deploying these systems raises important cyber security and public trust considerations.

Recent global conflicts have accelerated the adoption of low-cost, autonomous UAS, prompting governments worldwide to explore their use. Unlike commercial or military applications, broader public use of UAS must address varying security and safety risks due to their unique and oftentimes sensitive missions within multiple operating environments. 

Citizen perception plays a key role, as trust in these systems directly impacts their acceptance. For example, longstanding fears of over-surveillance in law enforcement, popularized in films like Minority Report, highlight the need for careful implementation in all areas of the public sector. You may also remember the confusion, concern, and frustration created by the mystery drone sightings in the Northeastern US in late 2024. It does not take much to appreciate the role of transparency and trust in creating broader buy-in for using drones to enhance public-facing services.

Trustworthiness encompasses more than just a system's ability to perform reliably and securely; it also involves ethical conduct, transparency, accountability, and a demonstrated commitment to respecting public concerns, such as privacy and fairness. When public sector organizations communicate openly about how UAS technologies operate and are regulated and consistently deliver on their promises without compromising individual rights, they build the foundation for trust. 

A trustworthy system must withstand scrutiny—not only by meeting technical standards but also by adhering to societal values—so that the public feels confident, supported, situationally aware, and protected rather than monitored and controlled.

Our team here at NCC Group has examined the implications of UAS technology within public sector missions and proposes strategies for assuring trust in the systems that use it. We emphasize the need for robust security measures, continuous demonstration of trustworthiness, and proactive public engagement to deliver both operational effectiveness and societal acceptance.

Novel characteristics of UAS risks

In recent years, there has been a rising proliferation of UAS as improvements in platform capabilities, payloads, and connectivity have driven production volumes and created a positive feedback loop that reduces unit costs. In parallel, there have been developments in the regulatory frameworks, industry standards, and the commercial ecosystem for supporting systems and services, including safety and security.

Increased digitalization and connectivity inevitably introduce and increase cyber security threats; however, the adoption of centralized services, as often found in enterprise IT and consumer devices, potentially compounds the severity of the risks. Centralized services include Communication, Navigation, and Surveillance (CNS) systems, the Command and Control (C2) link, Public Key Infrastructure (PKI), remote configuration, maintenance and software updates, monitoring, flight planning, and so on.

Because these services introduce single points of failure and reduce diversity across both systems and services, a compromise of any one of them could escalate into a threat affecting an entire UAS fleet.

A second characteristic is the strong need for societal buy-in when using this technology, in order to adequately address concerns about its potential misuse, whether that misuse is intentional or an overreach of powers beyond the original intention of safeguards or working norms.

Public confidence in law enforcement agencies at all levels of government is sensitive, and how the public perceives law enforcement use of UAS will need to be managed. There is a similar concern around data collected by any public office: what data is targeted for collection, how that data is protected, who has access to it, how it is supposed to be used, and how it could be misused or poisoned to skew the actions resulting from its analysis.

As such, it's not enough to merely comply with regulations or generic standards for commercial UAS and enterprise information security. Continuous demonstration of the trustworthiness of the solution is required from inception through to decommissioning.

Assuring Commercial Off-The-Shelf equipment

Commercial Off-The-Shelf (COTS) electronic hardware solutions, such as UAS, sensor payloads, and supporting ground systems, offer significant cost and functionality advantages. However, COTS solutions are likely to have been optimized for use cases subject to significantly different cyber security environments from those faced by the public sector, e.g., different regulations, threat scenarios, threat actors, and risk impact severities.

Experience from similar situations where COTS has been increasingly used, such as Operational Technology (OT) in critical national infrastructure and the transport sector, suggests that cyber security considerations are often recognized late in the acquisition of systems. This results in increased project costs, delays, missed opportunities, and an overall sub-optimal solution that lacks the desired quality.

Attempts to 'bolt on' security to existing solutions often lead to risks being accepted above the organization's risk appetite and create increased operational costs for compensating controls. While a bolt-on approach to security, and even a 'baked-in' approach where security controls are identified earlier in the system acquisition lifecycle, may be considered acceptably secure at a point in time, it will likely be:

  • lacking cyber resilience and defense-in-depth
  • challenging to maintain from a security perspective
  • limited in its ability to be upgraded over its lifetime

Recognition of the security challenges related to the use of COTS, and of how to manage them, is reflected in some recent security standards, such as:

  • ISO/SAE 21434 'Road vehicles - Cybersecurity engineering'
  • ED-203A 'Airworthiness Security Methods and Considerations'
  • TS 50701 'Railway applications - Cybersecurity'

The use of Principles Based Assurance (PBA), as promoted by the UK National Cyber Security Centre (NCSC), provides a means of using COTS with a proportionate approach to managing risk while promoting technology diversity (and therefore resilience) and remaining demonstrably trustworthy. Indeed, this is aligned with the overarching discipline of systems security engineering, which considers security holistically over a system's lifecycle, as described in NIST SP800-160 vol 1, 'Engineering Trustworthy Secure Systems', and vol 2, 'Developing Cyber-Resilient Systems: A Systems Security Engineering Approach'.

Approaches to drone assurance

Different approaches to assurance have different strengths, a good discussion of which can be found in section F.2.2 of NIST SP800-160 vol 1.

Axiomatic assurance (assurance by assertion) is commonly used in enterprise IT cyber security. It typically consists of assessments against a security control baseline. 

Synthetic assurance (assurance by structured reasoning) is the strongest and is best suited to complex and novel systems. It requires a holistic consideration of cyber security from the initial stages of system acquisition through to its decommissioning. Depending on the organization and use case, drone operations and the use of data collected by drones will be subject to a complex security environment, varied and advanced threat actors, and high levels of scrutiny, all of which argue for the adoption of a synthetic assurance approach.

Safety and cyber security frameworks already exist for UAS, but we should consider their relevance to the public sector's use of the technology. Civil aviation drone requirements are focused primarily on safety, which may result in solutions that cannot achieve the operational resilience required by a service subject to highly motivated and well-resourced threat actors, such as organized criminal groups and nation-state actors.

Military UAS are more focused on their ability to fulfill a mission and balance the trade-offs against safety, but they do so in a context, and with a risk tolerance, that is unacceptable for almost any other use case.

Assurance cases are a best-practice method for performing synthetic assurance. They are widely used for safety assurance, including for military airborne platforms and air traffic management (ATM) systems, and they are increasingly being promoted and used for security assurance, for example in ISO/SAE 21434, TS 50701, and ED-205 'Process Standard for Security Certification and Declaration of ATM ANS Ground Systems'. The Principles Based Assurance philosophy being introduced by the UK NCSC is based on the Claim-Argument-Evidence (CAE) approach for assurance cases.
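
To make the Claim-Argument-Evidence structure more concrete, the following minimal Python sketch models a fragment of a hypothetical UAS security assurance case. The claims, the evidence references, and the simple is_supported check are illustrative assumptions only; they do not reflect an NCSC-mandated schema or any particular standard's notation.

```python
from dataclasses import dataclass, field
from typing import List

# Minimal Claim-Argument-Evidence (CAE) model. The structure and example
# content below are illustrative assumptions, not a mandated schema.

@dataclass
class Evidence:
    description: str   # e.g. a pentest report, test log, or design review
    reference: str     # document identifier (hypothetical)

@dataclass
class Claim:
    statement: str                                   # the claim being asserted
    argument: str = ""                               # why sub-claims/evidence support it
    subclaims: List["Claim"] = field(default_factory=list)
    evidence: List[Evidence] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim is supported if it has direct evidence, or if every
        sub-claim in its argument is itself supported."""
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.is_supported() for c in self.subclaims)

# Illustrative fragment of a UAS security assurance case.
top = Claim(
    statement="The UAS C2 link is acceptably secure against tampering",
    argument="Decomposed by security property; each property is independently evidenced",
    subclaims=[
        Claim(
            statement="C2 traffic is mutually authenticated and encrypted",
            evidence=[Evidence("TLS configuration review", "DOC-0042")],
        ),
        Claim(
            statement="Loss of the C2 link triggers a defined fail-safe behavior",
            evidence=[Evidence("Link-loss flight test report", "DOC-0108")],
        ),
    ],
)

print(top.is_supported())  # True only if every leaf claim carries evidence
```

In practice, dedicated assurance case tooling and notations (such as CAE diagrams) would be used, but even a lightweight structure like this makes the chain from top-level claim to supporting evidence explicit and reviewable.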

UAS lifecycle considerations

Appropriate management of security risks requires understanding the threat actors and scenarios of a UAS platform, its supporting systems, and the associated data across the system's lifecycle. 

The following is a non-exhaustive list of examples:

Development and production
•  Supplier assurance
•  Supply chain security (e.g., protecting against counterfeit or malicious components, implementing hardware security modules (HSMs) and trusted execution environments (TEEs))
•  Firmware integrity
•  Documentation security (e.g., design schematics and software codebases)

Implementation
•  Secure configuration (e.g., to prevent tampering)
•  Access control – including minimizing insider threats
•  Firmware and software validation
•  Secure calibration records

Operation
•  Data security, integrity, and confidentiality onboard the UAS, during transmission and in the ground network
•  Security and resiliency of command-and-control links
•  Resilience of autonomous/automated operations
•  Redundancy, fail-safe and fail-secure mechanisms

Maintenance
•  Access logging
•  Secure software updates (see the signature-verification sketch after this list)
•  Tamper detection
•  Lifecycle maintenance monitoring

Decommissioning
•  Ensuring secure decommissioning of systems during upgrades and end-of-life
•  Safe disposal of sensitive data
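
As one illustration of the maintenance-stage concerns above (secure software updates and firmware integrity), the sketch below verifies a detached Ed25519 signature on a firmware image before installation, using the open-source Python 'cryptography' package. The file contents, key handling, and the install_if_authentic helper are hypothetical; a real deployment would pin the verification key in a hardware root of trust and keep signing keys in an HSM rather than generating them inline as shown here.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# --- at the manufacturer / fleet operator (signing side) ---
signing_key = Ed25519PrivateKey.generate()
firmware = b"...firmware image bytes..."          # placeholder payload
signature = signing_key.sign(firmware)            # detached Ed25519 signature

# --- on the UAS or maintenance workstation (verification side) ---
# In practice the public key would be pinned in a hardware root of trust,
# not derived next to the signing key as it is in this sketch.
verify_key: Ed25519PublicKey = signing_key.public_key()

def install_if_authentic(image: bytes, sig: bytes, key: Ed25519PublicKey) -> bool:
    """Install the update only if the detached signature verifies."""
    try:
        key.verify(sig, image)      # raises InvalidSignature on failure
    except InvalidSignature:
        return False                # reject and log the tampered image
    # ... proceed with the actual flash/update procedure ...
    return True

assert install_if_authentic(firmware, signature, verify_key)
```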

Specific observations for user areas of interest

The specific needs and technologies involved in public sector use of UAS introduce cyber security threats, of which we emphasize two. Firstly, the management of cyber security threats to safety needs to consider the following:

  • Beyond Visual Line of Sight (BVLOS) systems in unsegregated airspace and over uninvolved persons,
  • The ability to operate day and night in all weather conditions, and
  • Stable, reliable, and secure Command & Control (C2) and data downlink over extended operating ranges (a sketch of one possible C2 link protection follows this list).
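
As a minimal sketch of that last point, the following Python standard-library example shows a ground control station client enforcing mutual TLS on its C2/telemetry connection. The hostname, port, and certificate file names are placeholders, and a real C2 link would also need certificate rotation, revocation checking, and defined link-loss fail-safe behavior.

```python
import socket
import ssl

C2_HOST = "c2.uas.example"        # hypothetical C2 gateway
C2_PORT = 8443

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3          # refuse legacy protocol versions
ctx.load_verify_locations("fleet_root_ca.pem")        # pin the fleet's own CA
ctx.load_cert_chain("gcs_cert.pem", "gcs_key.pem")    # client cert enables mutual authentication

with socket.create_connection((C2_HOST, C2_PORT)) as raw:
    with ctx.wrap_socket(raw, server_hostname=C2_HOST) as tls:
        # Hostname and certificate-chain verification happen during the handshake.
        tls.sendall(b"HEARTBEAT\n")
        print(tls.version(), tls.getpeercert()["subject"])
```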

There has been considerable debate within specialist groups about how to manage the interplay between safety and security assurance. The current best practice is security-informed safety, which can be summarized as "If it's not secure, then it's not safe." Coordinating safety and security risk management activities is generally challenging for both technical and cultural reasons.

Secondly, the use of AI-based systems introduces risks of its own, for example by detrimentally impacting the robustness of the chain of custody for collected data. AI/ML models are vulnerable to adversarial attacks and data poisoning, in which adversaries exploit weaknesses in models to alter their behavior.

In an adversarial attack, input data (images, text, or signals) is manipulated in a way that is hard to detect and that forces the model to make wrong predictions. Poisoning occurs when an attacker manages to inject manipulated data into the training set, biasing or even compromising the model. These attacks can affect security-critical applications in cyber security, finance, and autonomous systems, where strong defenses such as adversarial training, anomaly detection, and model verification techniques will be needed.
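
As a toy illustration of the anomaly-detection defenses just mentioned, the NumPy sketch below screens a synthetic training set for out-of-distribution rows that might indicate poisoning. The data, the z-score threshold, and the flag_suspect_rows helper are illustrative assumptions; production defenses would combine statistical screening with data provenance tracking, robust training, and model verification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor-feature matrix: 500 legitimate samples plus a handful
# of injected (poisoned) rows with out-of-distribution values.
clean = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
poisoned = rng.normal(loc=8.0, scale=0.5, size=(5, 4))
X = np.vstack([clean, poisoned])

def flag_suspect_rows(X: np.ndarray, threshold: float = 4.0) -> np.ndarray:
    """Return indices of rows whose maximum per-feature z-score exceeds the threshold."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0) + 1e-12          # avoid division by zero
    z = np.abs((X - mu) / sigma)
    return np.where(z.max(axis=1) > threshold)[0]

suspects = flag_suspect_rows(X)
print(f"{len(suspects)} suspect rows flagged:", suspects)
# Flagged rows would be quarantined and reviewed before (re)training.
```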

The accuracy and security of ML/AI-based systems may need to be considered as an aspect of the overarching assurance case for the system's trustworthiness.

Final thoughts

The methods discussed here are not new, but their application is novel. Assurance cases are commonly used where trustworthiness is key, notably for safety cases, and are a mature, proven, and well-established approach. As stated earlier, the public sector use of UAS represents a complex system requiring high levels of assurance. Assurance cases can be readily adopted for this use case and can include tailored goals or claims.

A proactive move to ensure public buy-in would be to make assurance cases readily accessible to the public from the outset of any UAS acquisition (with appropriate redaction of evidence) and to update them regularly throughout the acquisition lifecycle. Aurora, the developer of an autonomous trucking system, took this bold step by publishing its safety case online, recognizing the benefits this offered for gaining public trust in a novel and controversial technology.

We are happy to support and champion others, specifically public sector organizations, that would like to follow suit.

A trusted partner of the aerospace sector

We never stop working to address the sector's biggest cyber challenges. Learn more about our tailored aerospace cyber security solutions.