Writing about aerospace and electronic systems, particularly with defense applications. Areas of interest include radar, sonar, space, satellites, unmanned platforms, hypersonic platforms, and artificial intelligence.
According to the Missile Defense Agency, the Defense Department's advanced missile tracking satellites logged their first views of a hypersonic flight test.
Hypersonic and Ballistic Tracking Space Sensor (HBTSS) satellites were able to track a hypersonic system.
MDA didn’t disclose the date of the flight, which took off from Wallops Island in Virginia.
The agency said in a June 14 statement, “Initial reports show the sensors successfully collected data after launch. MDA will continue to assess flight data over the next several weeks.”
Viewers may note that the Missile Defense Agency (MDA) and Space Development Agency (SDA) are actively working on components of a hypersonic missile defense system to protect against hypersonic weapons and other emerging missile threats. This includes developing the tracking and transport layers of the Proliferated Warfighter Space Architecture (PWSA) and various interceptor programs.
In this video, Defense Updates analyzes why successful tracking of hypersonic flight by US HBTSS satellites is a crucial development.
Chapters:
00:11 INTRODUCTION
02:15 HYPERSONIC WEAPON
04:09 TRACKING HYPERSONIC THREATS
06:49 AID IN HYPERSONIC WEAPON TESTING
06:49 ANALYSIS
Summary
Successful Tracking of Hypersonic Flight by US HBTSS Satellites: A Crucial Development
The successful tracking of a hypersonic flight test by the US HBTSS satellites is a significant milestone in the development of hypersonic missile defense systems. Hypersonic weapons, capable of flying at speeds greater than Mach 5, pose a significant threat to modern air defenses due to their extreme speed and maneuverability. The US DoD has recognized the urgency of developing effective defenses against these threats, stating that existing terrestrial- and space-based sensor architectures are insufficient to detect and track hypersonic weapons. The HBTSS satellites are part of the tracking layer of the Proliferated Warfighter Space Architecture (PWSA), designed to provide global indications, warning, tracking, and targeting of advanced missile threats, including hypersonic missile systems. With the successful tracking of the hypersonic flight test, the US is one step closer to developing a comprehensive missile defense system against these emerging threats.
Revolution at Mach 10: NASA-Backed Hypersonic Jets Poised to Transform Space Travel
Summary
This article discusses recent advancements in hypersonic jet technology for space travel, focusing on research conducted at the University of Virginia with NASA funding. Here are the key points:
1. Researchers are exploring hypersonic jets as a potential alternative to rocket-based space travel, aiming for aircraft-like spacecraft that can take off and land horizontally.
2. The study, published in Aerospace Science and Technology, demonstrates for the first time that airflow in supersonic combusting jet engines (scramjets) can be controlled using an optical sensor.
3. The team achieved adaptive control of a scramjet engine, which is a significant breakthrough for hypersonic propulsion.
4. The research builds on the earlier X-43A (NASA) and X-51 Waverider (Air Force) projects, which proved the concept of hypersonic flight but lacked reliable engine control.
5. Optical sensors could potentially replace traditional pressure sensors, offering faster response times and more comprehensive data about engine conditions.
6. The team used optical emission spectroscopy to analyze the flame within the engine, providing information about the engine's state that pressure sensors cannot capture.
7. The wind tunnel demonstration showed that engine control can be both predictive and adaptive, smoothly transitioning between scramjet and ramjet functioning.
8. This technology could lead to more efficient, safer, and reusable single-stage-to-orbit aircraft for space travel.
9. While more research is needed, the findings suggest that optical sensors could play a crucial role in controlling future hypersonic vehicles.
10. The ultimate goal is to develop aircraft-like space vehicles that combine cost-efficiency, safety, and reusability, potentially transforming space travel in the coming decades.
Based on the article, here are the researchers involved, their institutions, and related published work:
Researchers and Institution:
1. Christopher Goyne - Professor and Director of the UVA Aerospace Research Laboratory, University of Virginia School of Engineering and Applied Science
2. Chloe Dedic - Associate Professor, University of Virginia School of Engineering and Applied Science
3. Max Chern - Doctoral student, University of Virginia
4. Andrew Wanchek - Former graduate student, University of Virginia
5. Laurie Elkowitz - Doctoral student, University of Virginia
6. Robert Rockwell - Senior scientist, University of Virginia
All researchers are affiliated with the University of Virginia School of Engineering and Applied Science. The main research discussed in this article was published in the journal Aerospace Science and Technology in June 2024. The specific paper is:
Title: "Control of a dual-mode scramjet flow path utilizing optical emission spectroscopy"
Authors: Max Y. Chern, Andrew J. Wanchek, Laurie Elkowitz, Robert D. Rockwell, Chloe E. Dedic and Christopher P. Goyne
Publication Date: 18 April 2024
DOI: 10.1016/j.ast.2024.109144
The article mentions that this work was supported by a NASA ULI (University Leadership Initiative) grant led by Purdue University, suggesting collaboration with other institutions.
This article describes an experimental study on controlling the shock train leading edge (STLE) location in a dual-mode scramjet (DMSJ) engine using optical emission spectroscopy (OES) sensors. Here are the key points:
1. The study demonstrates closed-loop control of STLE location using OES sensor feedback for the first time.
2. OES sensors measure light emitted by excited chemical species (OH* and C2*) in the combustor to estimate and control STLE location.
3. The research compares OES sensor feedback to traditional wall pressure sensors for STLE control.
4. Two control methods were tested and compared:
- Proportional-Integral (PI) controller
- Characteristic Model-Based All-Coefficient Adaptive Control (ACAC)
5. Experiments were conducted at the University of Virginia Supersonic Combustion Facility, simulating conditions for a Mach 5 flight vehicle.
6. OES sensors provided smoother response compared to discrete wall pressure measurements.
7. Both PI and ACAC controllers performed similarly under nominal conditions and with introduced plant changes.
8. ACAC tended to command the fuel valve more efficiently than the PI controller.
9. The study demonstrates the potential of using OES sensors for STLE control in scramjet engines, which could improve performance and prevent unstart events.
10. This research contributes to the development of hypersonic propulsion systems for military applications and reusable launch vehicles.
The article emphasizes the novelty of using OES sensors for STLE control and the first experimental demonstration of adaptive control using OES feedback in a scramjet combustor.
Sensor Comparison Metrics
Based on the article, the main metrics used to compare STLE control using OES sensors versus traditional pressure sensors were:
1. Smoothness of response: The article mentions that utilizing an OES sensor for feedback "provided a smoother response than when utilizing discrete wall-based pressure measurements that tended to discretize the estimated STLE location."
2. Bias in STLE location: When using pressure measurements (PRM) for feedback, a bias in STLE location was noticed. The algorithm favored locations near pressure measurements, causing a difference between PRM and OES sensor estimated STLE locations.
3. Controller performance: The study compared how the controllers (PI and ACAC) performed using OES feedback versus pressure sensor feedback. This included looking at their ability to maintain and adjust STLE location according to reference points.
4. Sampling rate: The article notes that one key advantage of OES sensors is "the faster sampling rate that an optical sensor can provide" compared to traditional pressure sensors.
5. Direct measurement: OES sensors were noted to provide "direct measurements of the combustion process itself rather than its effect," which is what pressure sensors measure.
6. Response to plant changes: The study examined how the control systems using different sensors responded to introduced changes, such as mass addition downstream of the combustor and reduced fuel valve response.
7. Fuel valve efficiency: The article mentions that when using OES sensors, the ACAC controller "tended to command the fuel valve more efficiently" compared to the PI controller.
While the article doesn't provide specific quantitative metrics, these qualitative comparisons were used to evaluate the performance of OES sensors against traditional pressure sensors for STLE control.
Control Algorithm Comparison
Based on the information provided in the article, the comparison between the Proportional-Integral (PI) and Characteristic Model-Based All-Coefficient Adaptive Control (ACAC) controllers was primarily qualitative. However, we can infer some metrics that were likely used to compare their performance:
1. Response to nominal operation: The article states that "ACAC and PI controllers behave similarly both during nominal operation."
2. Response to plant changes: The controllers were compared when changes were introduced, such as "mass addition downstream of the combustor and a reduction of fuel valve response."
3. Fuel valve efficiency: The article mentions that "the ACAC controller tended to command the fuel valve more efficiently."
4. Ability to maintain/track STLE location: This was likely assessed based on how well each controller maintained the desired STLE location during different duty cycles.
5. Stability during transient processes: While not explicitly compared, this was mentioned as an advantage of the ACAC controller.
The article doesn't provide specific scoring methods for these metrics. The comparison appears to be based on observational analysis rather than a formal scoring system. The researchers likely used their "expert judgment" to assess the relative performance of the controllers based on these criteria.
Eric Williamson, University of Virginia School of Engineering and Applied Science
This is an artist's depiction of a Hyper-X research vehicle under scramjet power in free flight following separation from its booster rocket. New research into hypersonic jets may transform space travel by making scramjet engines more reliable and efficient, leading to aircraft-like spacecraft. Credit: NASA
Wind Tunnel Study Reveals Hypersonic Jet Engine Flow Can Be Controlled Optically
Researchers at the University of Virginia are exploring the potential of hypersonic jets for space travel, using innovations in engine control and sensing techniques. The work, supported by NASA, aims to enhance scramjet performance through adaptive control systems and optical sensors, potentially leading to safer, more efficient space access vehicles that function like aircraft.
The Future of Space Travel: Hypersonic Jets
What if the future of space travel were to look less like SpaceX's rocket-based Starship and more like NASA's "Hyper-X," the hypersonic jet plane that, 20 years ago this year, flew faster than any other aircraft before or since?
In 2004, NASA's final X-43A unmanned prototype tests were a milestone in the latest era of jet development: the leap from ramjets to faster, more efficient scramjets. The last test, in November of that year, clocked a world-record speed that only a rocket could have achieved previously: Mach 10, or 10 times the speed of sound.
NASA culled a lot of useful data from the tests, as did the Air Force six years later in similar tests on the X-51 Waverider, before the prototypes careened into the ocean.
Although hypersonic proof of concept was successful, the technology was far from operational. The challenge was achieving engine control, because the tech was based on decades-old sensor approaches.
NASA's B-52B launch aircraft cruises to a test range over the Pacific Ocean carrying the third and final X-43A vehicle, attached to a Pegasus rocket, on November 16, 2004. Credit: NASA / Carla Thomas
Breakthroughs in Hypersonic Engine Control
This month, however, brought some hope for potential successors to the X-plane series.
As part of a new NASA-funded study, University of Virginia School of Engineering and Applied Science researchers published data in the June issue of the journal Aerospace Science and Technology that showed for the first time that airflow in supersonic combusting jet engines can be controlled by an optical sensor. The finding could lead to more efficient stabilization of hypersonic jet aircraft.
In addition, the researchers achieved adaptive control of a scramjet engine, representing another first for hypersonic propulsion. Adaptive engine control systems respond to changes in dynamics to keep the system's overall performance optimal.
"One of our national aerospace priorities since the 1960s has been to build single-stage-to-orbit aircraft that fly into space from horizontal takeoff like a traditional aircraft and land on the ground like a traditional aircraft," said professor Christopher Goyne, director of the UVA Aerospace Research Laboratory, where the research took place.
"Currently, the most state-of-the-art craft is the SpaceX Starship. It has two stages, with vertical launch and landing. But to optimize safety, convenience, and reusability, the aerospace community would like to build something more like a 737."
Doctoral student Max Chern takes a closer look at the wind tunnel setup where University of Virginia School of Engineering and Applied Science researchers demonstrated that control of a dual-mode scramjet engine is possible with an optical sensor. Credit: Wende Whitman, UVA Engineering
Goyne and his co-investigator, Chloe Dedic, a UVA Engineering associate professor, believe optical sensors could be a big part of the control equation.
"It seemed logical to us that if an aircraft operates at hypersonic speeds of Mach 5 and higher, it might be preferable to embed sensors that work closer to the speed of light than the speed of sound," Goyne said.
Additional members of the team were doctoral student Max Chern, who served as the paper's first author, as well as former graduate student Andrew Wanchek, doctoral student Laurie Elkowitz and UVA senior scientist Robert Rockwell. The work was supported by a NASA ULI grant led by Purdue University.
Enhancing Scramjet Engine Performance
NASA has long sought to prevent something that can occur in scramjet engines called "unstart." The term indicates a sudden change in airflow. The name derives from a specialized testing facility called a supersonic wind tunnel, where a "start" means the wind has reached the desired supersonic conditions.
UVA has several supersonic wind tunnels, including the UVA Supersonic Combustion Facility, which can simulate engine conditions for a hypersonic vehicle traveling at five times the speed of sound.
"We can run test conditions for hours, allowing us to experiment with new flow sensors and control approaches on a realistic engine geometry," Dedic said.
Goyne explained that "scramjets," short for supersonic combustion ramjets, build on ramjet technology that has been in common use for years.
This computational fluid dynamics image from the original Hyper-X tests shows the engine operating at Mach 7. Credit: NASA
Ramjets essentially "ram" air into the engine, using the forward motion of the aircraft to generate the temperatures and pressures needed to burn fuel. They operate in a range of about Mach 3 to Mach 6. As the inlet at the front of the craft narrows, the internal air velocity slows to subsonic speeds in a ramjet combustion engine. The plane itself, however, does not.
Scramjets are a little different, though. While they are also "air-breathing" and have the same basic setup, they need to maintain that super-fast airflow through the engine to reach hypersonic speeds.
"If something happens within the hypersonic engine, and subsonic conditions are suddenly created, it's an unstart," Goyne said. "Thrust will suddenly decrease, and it may be difficult at that point to restart the inlet."
Testing a Dual-Mode Scramjet Engine
Currently, like ramjets, scramjet engines need a step-up to get them to a speed where they can intake enough oxygen to operate. That may include a ride attached to the underside of a carrier aircraft as well as a rocket boost.
The latest innovation is a dual-mode scramjet combustor, which was the type of engine the UVA-led project tested. The dual engine starts in ramjet mode at lower Mach numbers, then shifts into receiving full supersonic airflow in the combustion chamber at speeds exceeding Mach 5.
Preventing unstart as the engine makes that transition is crucial.
Christopher Goyne, professor and director of the UVA Aerospace Research Laboratory, and Chloe Dedic, associate professor. Credit: Wende Whitman, UVA Engineering
Incoming wind interacts with the inlet walls in the form of a series of shock waves known as a "shock train." Traditionally, the leading edge of those waves, which can be destructive to the aircraft's integrity, has been controlled by pressure sensors. The machine can adjust, for example, by relocating the position of the shock train.
But where the leading edge of the shock train resides can change quickly if flight disturbances alter mid-air dynamics. The shock train can pressurize the inlet, creating the conditions for unstart.
So, "If you are sensing at the speed of sound, yet the engine processes are moving faster than the speed of sound, you don't have very much response time," Goyne said.
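A back-of-the-envelope comparison makes the point; all numbers below are illustrative assumptions, not values from the article.

```python
# Illustrative timescale comparison (assumed values, not from the article):
# acoustic information vs. optical information over a ~1 m engine section.
sound_speed = 600.0    # m/s, rough sound speed in hot combustor gas
light_speed = 3.0e8    # m/s
distance = 1.0         # m, assumed distance from disturbance to sensor

print(f"acoustic delay: {distance / sound_speed * 1e3:.2f} ms")  # ~1.7 ms
print(f"optical delay:  {distance / light_speed * 1e9:.2f} ns")  # ~3.3 ns
```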
He and his collaborators wondered if a pending unstart could be predicted by observing properties of the engine’s flame instead.
Sensing the Spectrum of a Flame
The team decided to use an optical emission spectroscopy sensor for the feedback needed to control the shock train leading edge.
No longer limited to information obtained at the engine's walls, as pressure sensors are, the optical sensor can identify subtle changes both inside the engine and within the flow path. The tool analyzes the amount of light emitted by a source (in this case, the reacting gases within the scramjet combustor), as well as other factors, such as the flame's location and spectral content.
"The light emitted by the flame within the engine is due to relaxation of molecular species that are excited during combustion processes," explained Elkowitz, one of the doctoral students. "Different species emit light at different energies, or colors, offering new information about the engine's state that is not captured by pressure sensors."
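To illustrate the idea, the sketch below integrates a measured spectrum over two chemiluminescence bands and forms their ratio; the band windows and function names are illustrative assumptions, not the paper's actual processing.

```python
import numpy as np

# Assumed band locations: OH* chemiluminescence sits near 310 nm and the
# C2* Swan band near 516 nm; the integration windows are illustrative.
OH_BAND = (305.0, 320.0)  # nm
C2_BAND = (510.0, 520.0)  # nm

def band_intensity(wavelength_nm, counts, band):
    """Sum emission counts over a wavelength window of the spectrum."""
    lo, hi = band
    mask = (wavelength_nm >= lo) & (wavelength_nm <= hi)
    return float(counts[mask].sum())

def c2_to_oh_ratio(wavelength_nm, counts):
    """Ratio of integrated C2* to OH* emission; such ratios correlate with
    local equivalence ratio and hence with the engine's combustion state."""
    return (band_intensity(wavelength_nm, counts, C2_BAND)
            / band_intensity(wavelength_nm, counts, OH_BAND))
```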
Current UVA Engineering mechanical and aerospace doctoral students Laurie Elkowitz and Max Chern were among the influential members of the team. Credit: Wende Whitman, UVA Engineering
The team's wind tunnel demonstration showed that the engine control can be both predictive and adaptive, smoothly transitioning between scramjet and ramjet functioning.
The wind tunnel test, in fact, was the world's first proof that adaptive control in these types of dual-function engines can be achieved with optical sensors.
"We were very excited to demonstrate the role optical sensors may play in the control of future hypersonic vehicles," first author Chern said. "We are continuing to test sensor configurations as we work toward a prototype that optimizes package volume and weight for flight environments."
Building Toward the Future
While much more work remains to be done, optical sensors may be a component of the future Goyne believes will be realized in his lifetime: plane-like travel to space and back.
Dual-mode scramjets would still require a boost of some sort to get the aircraft to at least Mach 4. But there would be the additional safety of not relying exclusively on rocket technology, which requires highly flammable fuel to be carried alongside large amounts of chemical oxidizer to combust the fuel.
That decreased weight would allow more room for passengers and payload.
Such an all-in-one aircraft, which would glide back to Earth like the space shuttles once did, might even provide the ideal combination of cost-efficiency, safety and reusability.
"I think it's possible, yeah," Goyne said. "While the commercial space industry has been able to lower costs through some reusability, they haven't yet captured the aircraft-like operations. Our findings could potentially build on the storied history of Hyper-X and make space access safer than current rocket-based technology."
Reference: "Control of a dual-mode scramjet flow path utilizing optical emission spectroscopy" by Max Y. Chern, Andrew J. Wanchek, Laurie Elkowitz, Robert D. Rockwell, Chloe E. Dedic and Christopher P. Goyne, 18 April 2024, Aerospace Science and Technology. DOI: 10.1016/j.ast.2024.109144
Control of a dual-mode scramjet flow path utilizing optical emission spectroscopy
Shock train leading edge (STLE) location control within a Dual-Mode Scramjet (DMSJ) flow path was demonstrated using an optical emission spectroscopy (OES) sensor for control feedback. Emission from electronically excited chemical species, OH* and C2*, was observed within the combustor and used for feedback to control the STLE within the DMSJ isolator. An optical emission sensor was used to experimentally demonstrate STLE control using a Proportional-Integral (PI) controller. Feedback using this sensor was compared to the traditional approach of using wall pressure sensors. Utilizing an OES sensor for feedback proved to be an effective method to estimate and control the STLE location and provided a smoother response than when utilizing discrete wall-based pressure measurements that tended to discretize the estimated STLE location. Characteristic Model-Based All-Coefficient Adaptive Control (ACAC) was also implemented and compared to the PI controller response using the OES signal as feedback. Plant changes were introduced in the form of mass addition downstream of the combustor and a reduction of fuel valve response. Experiments showed that the ACAC and PI controllers behave similarly both during nominal operation and with plant changes, though the ACAC controller tended to command the fuel valve more efficiently. This paper presents the first experimental demonstration of closed-loop control of the STLE location within a scramjet flow path using OES sensor feedback. This is also the first demonstration of adaptive control of STLE location utilizing OES sensor feedback.
Scramjet technology has the potential to propel hypersonic military systems and reusable launch vehicles for responsive space access. Concepts such as the NASA X-43A have been developed to demonstrate the practical use of scramjets [1], [2]. A special type of scramjet, known as a dual-mode scramjet (DMSJ), makes use of an isolator upstream of the combustor. The isolator reduces the incoming flow to subsonic conditions via a shock train. This extends the operational range of the engine by allowing it to operate in a ramjet mode. Therefore, between about Mach 3 and 6, a DMSJ typically operates with a precombustion shock train in the isolator and with subsonic fuel-air mixing and combustion in the combustor [3].
The isolator shock train works to match the scramjet inlet pressure with the higher downstream combustor pressure. The location of the shock train leading edge (STLE) is indicative of engine conditions and is sensitive to inlet and combustor conditions [4]. As the combustor pressure increases relative to the inlet pressure, the shock train increases in length. The STLE can travel forward into the scramjet inlet, causing subsonic flow in the inlet, during an event called unstart. Unstart is detrimental to a flight vehicle as it greatly reduces mass flow into the engine, which in turn reduces thrust and increases drag [5]. Unstart prevention is required for hypersonic air-breathing engines as an unexpected unstart event could lead to the loss of the vehicle. A common, passive method to reduce the risk of unstart is to increase the length of the isolator, but this also increases the overall engine length and weight. Alternatively, active control of the STLE offers another approach for maximizing the operational envelope and the performance of the engine.
Many different active control methods for preventing unstart have been proposed, including boundary layer suction [6], mass addition via vortex generator jets [7], plasma actuation [8], back pressure regulation via ramp actuation downstream of the combustor [9], [10], and active fueling control [11], [12], [13], [14]. An advantage of using active fueling control for unstart prevention is that it can largely use the existing fuel system, reducing the complexity of hardware design and fabrication. Controlling fuel as a method of controlling STLE location is possible because of the pressure balance taking place within the engine. As the fueling rate increases, increased heat release in the combustor leads to an increase in combustor pressure, which moves the STLE forward in the isolator. The opposite occurs when the fueling rate is decreased. Coupling of the combustion reaction with the overall engine state and STLE location enables the utilization of combustor sensors as feedback for STLE estimation and control.
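In code, that feedback path has a simple shape. The sketch below uses a basic PI law; the gains, sample period, and the estimate_stle/command_valve interfaces are hypothetical stand-ins, and the paper's controllers (PI and ACAC) are more carefully designed.

```python
# Minimal closed-loop sketch of fuel-based STLE control (assumed interfaces).
KP, KI = 0.05, 0.5   # assumed proportional and integral gains
DT = 0.01            # assumed control period in seconds

def run_pi_loop(stle_ref, estimate_stle, command_valve, steps=1000):
    integral = 0.0
    for _ in range(steps):
        error = stle_ref - estimate_stle()   # STLE error in x/H units
        integral += error * DT
        # More fuel raises combustor pressure and moves the STLE upstream.
        command_valve(KP * error + KI * integral)
```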
Pressure sensors are typically used to measure combustor state, but recent studies have shown benefits of using optical sensors [14], [15]. Optical emission spectroscopy (OES) provides a spectrally-resolved measurement of the light emitted following the formation of electronically-excited species in a chemical reaction. During ethylene combustion, or any hydrocarbon combustion process, electronically excited OH*, CH* and C2* are formed as intermediate species [16]. The relative intensities of light emitted from these species have been shown to be correlated with local fueling equivalence ratio (ER) in partially premixed flames [17]. Such an established correlation can be used to control ER or STLE location in a scramjet through closed-loop control approaches. Fig. 1 visualizes how OES measurements may be used to control the STLE in a scramjet in such a control loop. It depicts two different measured emission spectra that correspond to two different STLE locations, at different equivalence ratios in the scramjet combustor. As the ER increases, the spectra measured from the flame chemiluminescence change. This corresponds to a change in shock train length and STLE location. The key advantages of using OES for sensing of engine parameters are the ability to obtain direct measurements of the combustion process itself rather than its effect, i.e. isolator pressure rise, as well as the faster sampling rate that an optical sensor can provide. Previous work has applied this concept to DMSJ control by using the ratio of integrated C2* and OH* emission intensities to control fuel pressure and, hence, combustor global ER [14]. Although Refs. [14] and [15] establish a link between the OES signal at a specific location and global ER, and STLE control using OES has been proposed in previous work [18], the concept has not yet been experimentally demonstrated.
If inlet conditions of a scramjet remain constant, a correlation between the combustor state, as estimated by an OES sensor, and STLE location may be established. However, this is not always the case during flight. Many factors can cause inlet conditions to change during flight, such as changes in altitude, weather [19], and Angle-of-Attack (AoA) of the vehicle [20], [21]. These factors will cause the STLE behavior to deviate from the baseline condition and may lead to unfavorable controller responses. Other changes, such as hardware malfunctions (i.e. a reduction in valve response), engine geometry change due to thermal expansion, or changes in the optical transmission of windows will likewise cause a change in the plant of the control system. Ultimately, an engine control system on a flight vehicle may use multiple sensor types for feedback. The goal of this work is to demonstrate that the relationship between OES sensor measurements and STLE location is sufficient for closed-loop control of the STLE and to explore the advantages of using OES sensors over traditional pressure measurements.
An adaptive controller may be required to mitigate issues that arise from uncertainties in scramjet operability and performance of a real flight system. Therefore, a Model Reference Adaptive Controller (MRAC) was examined in this study: an approach called Characteristic Model-Based All-Coefficient Adaptive Control (ACAC) [22]. MRAC is typically deployed on plants where uncertainties can be in the form of degradation, uncertain plant dynamics, or other unmodeled phenomena [23]. Direct MRAC utilizes adaptive parameters to adjust controller gains and provide a defined system performance, whereas indirect MRAC utilizes adaptive parameters to model the system dynamics, which are then used directly in the control law. The form of MRAC used in this work, ACAC, is an indirect method developed by Wu and Xie [24] and has been applied to many engineering plants [22], including the control of a high-speed rotor supported on magnetic bearings [25] and simulated altitude control for the X-34 launch vehicle [26]. ACAC was selected for use in the current study due to its relatively simple implementation and its stability during transient processes [24]. ACAC utilizes a plant estimation method known as characteristic modeling to provide a real-time estimation of the plant response. This estimation is then implemented directly into the golden section adaptive control law to provide a robustly stable system, even during transient processes [24].
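For readers unfamiliar with ACAC, the sketch below shows the general shape of the method as described in the ACAC literature: a low-order characteristic model identified online, feeding a golden-section control law. The initialization, estimator choice (recursive least squares here), and regularization are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

L1_GS, L2_GS = 0.382, 0.618   # golden-section coefficients used in ACAC
LAM = 1e-3                    # regularisation keeping the law well-posed

class ACACSketch:
    """Characteristic-model-based adaptive control, sketched from the general
    ACAC literature (not the paper's exact design)."""

    def __init__(self):
        self.theta = np.array([1.0, 0.0, 0.1])  # [f1, f2, g0] initial guess
        self.P = np.eye(3) * 100.0              # RLS covariance

    def update_model(self, e_k, e_km1, u_k, e_kp1):
        # Recursive least squares fit of e(k+1) = f1*e(k) + f2*e(k-1) + g0*u(k)
        phi = np.array([e_k, e_km1, u_k])
        denom = 1.0 + phi @ self.P @ phi
        gain = self.P @ phi / denom
        self.theta = self.theta + gain * (e_kp1 - phi @ self.theta)
        self.P = self.P - np.outer(gain, phi @ self.P)

    def control(self, e_k, e_km1):
        f1, f2, g0 = self.theta
        # Golden-section adaptive control law driving the error to zero.
        return -(L1_GS * f1 * e_k + L2_GS * f2 * e_km1) / (g0 + LAM)
```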
Other control laws can be implemented to aid the performance of the ACAC controller while taking advantage of its stability properties. Maintaining/tracking, derivative, and integral control laws have been employed with ACAC to provide good closed-loop performance for engineering systems [22]. Although ACAC has been around since the 1990s, few experimental results using the controller have been published. This study provides an experimental analysis of this controller compared to the more standard proportional-integral (PI) controller. Given the uncertain and non-linear nature of DMSJ operation and its link to combustor emission, ACAC's adaptive qualities make it a good candidate for control of the DMSJ flow path using OES sensors as feedback.
This study builds on the work of Ref. [18], where the possibility of utilizing OES for STLE control was explored through simulation. The present study has two objectives: (1) demonstrate experimental closed-loop control of the STLE location in a scramjet isolator flow path utilizing OES feedback, and (2) explore adaptive control in comparison to a standard PI control method for this system. This is the first demonstration of closed-loop control of the STLE location within a scramjet isolator flow path utilizing OES sensor feedback. This is also the first time adaptive control was experimentally demonstrated utilizing OES sensor feedback from a scramjet combustor.
In this paper, the experimental facility setup is described in two parts: subsection 2.1 provides details of the facility conditions, instrumentation, actuators, and control hardware, and subsection 2.2 provides details on the method of optical emission sensing and its synchronization with the control system. The control setup is then described in four parts: subsection 3.1 explains the method by which pressure measurements within the isolator were used to estimate the STLE, subsection 3.2 explains how these estimates were used in conjunction with OES measurements to provide calibrations for the control loop, subsection 3.3 explains the closed-loop control that was implemented, and finally, subsection 3.4 explains the control law of the ACAC controller. The results and discussion section begins with a brief outline of the test conditions and duty cycles that were examined, and then the results are examined in two parts: subsection 4.1, which discusses the performance of the controller using OES feedback in comparison to pressure sensors, and subsection 4.2, which discusses the performance of the ACAC controller. Finally, a conclusion summarizes the results and impacts of this work.
Section snippets
Facility and instrumentation
The experiments in this paper were conducted at the University of Virginia Supersonic Combustion Facility (UVASCF). The facility consists of an electrically heated, continuous-flow, direct-connect wind tunnel which was used to provide air at 1200 K and 300 kPa total conditions to a DMSJ flow path through a Mach 2 nozzle. This condition simulates the engine inflow Mach number and enthalpy of a flight vehicle at approximately Mach 5 [27], [28]. The facility conditions are measured and recorded on
Control setup
This section details the methods by which closed-loop STLE control was achieved. This begins with a description of how the STLE was detected, followed by the method by which the OES signal was calibrated to the STLE. Details are then provided on how the control loop was implemented, followed by the control law of the ACAC controller.
Results and discussion
In order to fully examine control effectiveness, two duty cycles with step changes of reference STLE location were exercised with each controller: the "Constant Step Size" duty cycle with steps of ±5 x/H between -20 and -35 x/H, and the "Differing Step Size" duty cycle with steps of different sizes between -20 and -35 x/H. The time between the step changes was 3 seconds to allow the STLE location to stabilize at each location. Each duty cycle was repeated at least twice for each controller at
Conclusion
The control of the STLE location using OES sensor feedback was experimentally demonstrated and was shown to be a viable approach. Utilizing the OES sensor as feedback provided a comparable response to using the PRM as feedback. Bias in STLE location was noticed during control with the PRM, though, as the algorithm favored locations near pressure measurements. This bias caused a difference between PRM and OES sensor estimated STLE location. Introducing flow imaging of the isolator flow path was
CRediT authorship contribution statement
Max Y. Chern: Data curation, Investigation, Writing – original draft, Writing – review & editing. Andrew J. Wanchek: Writing – original draft, Software, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Laurie Elkowitz: Methodology, Formal analysis, Data curation, Validation, Writing – original draft. Robert D. Rockwell: Data curation, Investigation, Supervision, Validation. Chloe E. Dedic: Methodology, Investigation, Funding acquisition, Conceptualization, Project
Declaration of Competing Interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Andrew Wanchek reports financial support was provided by NASA Aeronautics Research Mission Directorate. Max Chern reports financial support was provided by NASA Aeronautics Research Mission Directorate. Laurie Elkowitz reports financial support was provided by the National Defense Science and Engineering Graduate Fellowship.
Acknowledgements
The authors would like to thank Jack Donnellan and Joe Fritch from GE Aerospace for their support. This work was financially supported by NASA's Space Technology Research Grants Program (NASA ULI Grant #80NSSC21M0069 P00001). L. Elkowitz was supported by the National Defense Science and Engineering Graduate (NDSEG) Fellowship.
The proliferation of drones, or unmanned aerial vehicles (UAVs), has raised significant safety concerns due to their potential misuse in activities such as espionage, smuggling, and infrastructure disruption. This paper addresses the critical need for effective drone detection and classification systems that operate independently of UAV cooperation.
We evaluate various convolutional neural networks (CNNs) for their ability to detect and classify drones using spectrogram data derived from consecutive Fourier transforms of signal components. The focus is on model robustness in low signal-to-noise ratio (SNR) environments, which is critical for real-world applications.
A comprehensive dataset is provided to support future model development. In addition, we demonstrate a low-cost drone detection system using a standard computer, software-defined radio (SDR) and antenna, validated through real-world field testing. On our development dataset, all models consistently achieved an average balanced classification accuracy of >= 85% at SNR > -12 dB.
In the field test, these models achieved an average balanced accuracy of > 80%, depending on transmitter distance and antenna direction. Our contributions include: a publicly available dataset for model development, a comparative analysis of CNNs for drone detection under low SNR conditions, and the deployment and field evaluation of a practical, low-cost detection system.
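For context, balanced accuracy is the mean of per-class recall, which keeps an abundant noise class from inflating the score. A toy check with scikit-learn (labels invented for illustration):

```python
from sklearn.metrics import balanced_accuracy_score

# Balanced accuracy = mean recall over classes, so a rare drone class
# counts as much as the abundant noise class.
y_true = ["DJI", "Noise", "Noise", "Taranis", "DJI", "Noise"]
y_pred = ["DJI", "Noise", "DJI", "Taranis", "DJI", "Noise"]
print(balanced_accuracy_score(y_true, y_pred))  # (1.0 + 2/3 + 1.0) / 3 ≈ 0.89
```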
Comments: 11 pages, submitted to IEEE Open Journal of Signal Processing
Subjects: Signal Processing (eess.SP); Machine Learning (cs.LG)
From: Stefan Glüge
[v1] Wed, 26 Jun 2024 12:50:55 UTC (16,275 KB)
Summary
The paper focuses on radio frequency (RF) signals from several types of consumer and hobbyist drones. Specifically:
1. The development dataset included signals from 6 drones and 4 remote controllers:
- DJI Phantom 4 Pro drone and its remote control
- Futaba T7C remote control
- Futaba T14SG remote control and R7008SB receiver
- Graupner mx-16 remote control and GR-16 receiver
- Taranis ACCST X8R Receiver
- Turnigy 9X remote control
2. The signals were recorded in the 2.4 GHz ISM band, which is commonly used by consumer drones for communication between the drone and its remote control.
3. The paper mentions that these signals occur in short bursts, typically 1.3-2 ms long, with repetition periods ranging from about 60 to 600 ms depending on the specific drone model.
4. The signals were recorded at a sampling frequency of 56 MHz initially, then downsampled to 14 MHz for processing.
5. For the field test, they used a slightly different set of drones/controllers, including:
- DJI Phantom Pro 4 drone and remote
- Futaba T14 remote control
- Futaba T7 remote control
- FrSky Taranis Q X7 remote control
- Turnigy Evolution remote control
The focus was on the RF signals emitted by these consumer-grade drones and their remote controls, as detecting these signals can indicate the presence of a drone even when it's not visible.
This paper presents research on detecting and classifying drones using radio frequency (RF) signals and convolutional neural networks (CNNs). Key points include:
1. The authors developed CNN models to detect and classify drones using spectrogram data from RF signals.
2. They focused on model robustness in low signal-to-noise ratio (SNR) environments, which is important for real-world applications.
3. A comprehensive dataset was created and made publicly available to support future research.
4. The authors implemented a low-cost drone detection system using standard computer hardware, software-defined radio, and an antenna.
5. In lab tests, the models achieved ≥85% balanced accuracy at SNRs above -12 dB.
6. Field tests showed >80% average balanced accuracy, varying based on transmitter distance and antenna direction.
7. The simplest model (VGG11 BN) performed as well as more complex models.
8. Most misclassifications occurred between noise and drone signals, rather than between different drone types.
9. The system could reliably detect drones up to 670 meters away in real-world conditions.
10. Limitations included potential interference in field tests and the inability to detect multiple simultaneous transmitters.
The research demonstrates the feasibility of using CNNs for drone detection with RF signals in challenging real-world conditions, while also providing resources for further research in this area.
The paper primarily focused on using variations of the Visual Geometry Group (VGG) CNN architecture. Specifically:
1. VGG11 BN
2. VGG13 BN
3. VGG16 BN
4. VGG19 BN
Key points about these models:
- The "BN" suffix indicates that these versions include batch normalization layers after the convolutions.
- The main idea of the VGG architecture is to use multiple layers of small (3x3) convolutional filters instead of larger ones.
- The number in each model name (11, 13, 16, 19) refers to the number of layers with weights in the network.
- For the dense classification layer, they used 256 linear units followed by 7 linear units at the output (one unit per class).
- The models were trained on 2D spectrogram data derived from the RF signals.
- Interestingly, the authors found no significant performance advantage in using the more complex models (like VGG19 BN) over the simplest model (VGG11 BN) for this specific task.
- The VGG11 BN model, being the least complex, required the least number of training epochs to achieve optimal performance on the validation set.
The authors chose these VGG variants due to their proven effectiveness in image classification tasks, adapting them to work with the 2D spectrogram representation of the RF signals. They focused on comparing the performance of these different VGG variants rather than exploring other types of CNN architectures.
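A minimal PyTorch sketch consistent with this description: torchvision's VGG11-BN with the first convolution adapted to 2-channel spectrogram input and the classifier replaced by 256 hidden units feeding 7 outputs. The input resolution below is an assumption, not a figure from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg11_bn

# VGG11 with batch norm, adapted to 2-channel spectrograms and 7 classes.
model = vgg11_bn(weights=None)

# Spectrograms have 2 channels (real/imag spectra) instead of 3 RGB channels.
model.features[0] = nn.Conv2d(2, 64, kernel_size=3, padding=1)

# Dense head as described: 256 linear units, then one unit per class.
model.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 256),  # 512x7x7 is VGG's pooled feature size
    nn.ReLU(inplace=True),
    nn.Linear(256, 7),
)

x = torch.randn(4, 2, 224, 224)   # batch of 2-channel spectrograms (assumed size)
print(model(x).shape)             # torch.Size([4, 7])
```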
Corresponding author: Stefan Glüge (email: stefan.gluege@zhaw.ch).
Robust Low-Cost Drone Detection and Classification in Low SNR Environments
Stefan Glüge, Matthias Nyfeler, Ahmad Aghaebrahimian, Nicola Ramagnano, and Christof Schüpbach, Fellow, IEEE
Institute of Computational Life Sciences, Zurich University of Applied Sciences, 8820 Wädenswil, Switzerland
Institute for Communication Systems, Eastern Switzerland University of Applied Sciences, 8640 Rapperswil-Jona, Switzerland
Armasuisse Science + Technology, 3602 Thun, Switzerland
Abstract
The proliferation of drones, or unmanned aerial vehicles (UAVs), has raised significant safety concerns due to their potential misuse in activities such as espionage, smuggling, and infrastructure disruption. This paper addresses the critical need for effective drone detection and classification systems that operate independently of UAV cooperation. We evaluate various convolutional neural networks (CNNs) for their ability to detect and classify drones using spectrogram data derived from consecutive Fourier transforms of signal components. The focus is on model robustness in low signal-to-noise ratio (SNR) environments, which is critical for real-world applications. A comprehensive dataset is provided to support future model development. In addition, we demonstrate a low-cost drone detection system using a standard computer, software-defined radio (SDR) and antenna, validated through real-world field testing. On our development dataset, all models consistently achieved an average balanced classification accuracy of >= 85% at SNR > -12 dB. In the field test, these models achieved an average balanced accuracy of > 80%, depending on transmitter distance and antenna direction. Our contributions include: a publicly available dataset for model development, a comparative analysis of CNNs for drone detection under low SNR conditions, and the deployment and field evaluation of a practical, low-cost detection system.
Keywords: Deep neural networks, Robustness, Signal detection, Unmanned aerial vehicles
1 INTRODUCTION
Drones, or civil UAVs, have evolved from hobby toys to commercial systems with many applications. In particular, mini/amateur drones have become ubiquitous. With the proliferation of these low-cost, small and easy-to-fly drones, safety issues have become more pressing (e.g. spying, transfer of illegal or dangerous goods, disruption of infrastructure, assault). Although regulations and technical solutions (such as transponder systems) are in place to safely integrate UAVs into the airspace, detection and classification systems that do not rely on the cooperation of the UAV are necessary. Various technologies such as audio, video, radar, or radio frequency (RF) scanners have been proposed for this task [1].
In this paper, we evaluate different CNNs for drone detection and classification using the spectrogram data computed with consecutive Fourier transforms for the real and imaginary parts of the signal. To facilitate future model development, we make the dataset publicly available. In terms of performance, we focus on the robustness of the models to low SNRs, as this is the most relevant aspect for a real-world application of the system. Furthermore, we evaluate a low-cost drone detection system consisting of a standard computer, SDR, and antenna in a real-world field test.
Our contributions can therefore be summarised as follows:
• We provide the dataset used to develop the model. Together with the code to load and transform the data, it can be easily used for future model development.
• We compare different CNNs using 2D spectrogram data for detection and classification of drones based on their RF signals under challenging conditions, i.e. low SNRs down to dB.
• We visualise the model embeddings to understand how the model clusters and separates different classes, to identify potential overlaps or ambiguities, and to examine the hierarchical relationships within the learned features.
• We implement the models in a low-cost detection system and evaluate them in a field test.
1.1 RELATED WORK
A literature review on drone detection methods based on deep learning (DL) is given in [1] and [2]. Both works reflect the state of the art at the time of their publication. Different DL algorithms are discussed with respect to the techniques used to detect drones based on visual, radar, acoustic, and RF signals. Given these general overviews, we briefly summarise recent work based on RF data, with a particular focus on the data side of the problem to motivate our work.
With the advent of DL-based methods, the data used to train models became the cornerstone of any detection system. Table 1 provides an overview of openly available datasets of RF drone signals. The DroneRF dataset [3] is one of the first openly available datasets. It contains RF time series data from three drones in four flight modes (i.e. on, hovering, flying, video recording) recorded by two universal software radio peripheral (USRP) SDR transceivers [4]. The dataset is widely used and enabled follow-up work with different approaches to classification systems, i.e. DL-based [5, 6], focused on pre-processing and combining signals from two frequency bands [7], genetic algorithm-based heterogeneous integrated k-nearest neighbour [8], and hierarchical reinforcement learning-based [9]. In general, the classification accuracies reported in the papers on the DroneRF dataset are close to . Specifically, [4], [5], and [6] report an average accuracy of , , and , respectively, to detect the presence of a drone. There is therefore an obvious need for a harder, more realistic dataset.
Consequently, [10] investigate the detection and classification of drones in the presence of Bluetooth and Wi-Fi signals. Their system used a multi-stage detector to distinguish drone signals from the background noise and interfering signals. Once a signal was identified as a drone signal, it was classified using machine learning (ML) techniques. The detection performance of the proposed system was evaluated for different SNRs. The corresponding recordings ( drone controls from eight different manufacturers) are openly available [11]. Unfortunately, the Bluetooth/Wi-Fi noise is not part of the dataset. Ozturk et al. [12] used the dataset to further investigate the classification of RF fingerprints at low SNRs by adding white Gaussian noise to the raw data. Using a CNN, they achieved classification accuracies ranging from to for SNR dB.
The openly available DroneDetect dataset [13] was created by Swinney and Woods [14]. It contains raw in-phase and quadrature (IQ) data recorded with a BladeRF SDR. Seven drone models were recorded in three different flight modes (on, hovering, flying). Measurements were also repeated with different types of noise, such as interference from a Bluetooth speaker, a Wi-Fi hotspot, and simultaneous Bluetooth and Wi-Fi interference. The dataset does not include measurements without drones, which would be necessary to evaluate a drone detection system. The results in [14] show that Bluetooth signals are more likely to interfere with detection and classification accuracy than Wi-Fi signals. Overall, frequency domain features extracted from a CNN were shown to be more robust than time domain features in the presence of interference. In [15] the drone signals from the DroneDetect dataset were augmented with Gaussian noise and SDR-recorded background noise. Hence, the proposed approach could be evaluated regarding its capability to detect drones. They trained a CNN end-to-end on the raw IQ data and report an accuracy of for detection and between and for classification.
The Cardinal RF dataset [16] consists of the raw time series data from six drones + controller, two Wi-Fi and two Bluetooth devices. Based on this dataset, Medaiyese et al. [17] proposed a semi-supervised framework for UAV detection using wavelet analysis. Accuracy between and was achieved at SNRs of dB and dB, while it dropped to chance level for SNRs below dB to dB. In addition, [18] investigated different wavelet transforms for the feature extraction from the RF signals. Using the wavelet scattering transform from the steady state of the RF signals at dB SNR to train SqueezeNet [19], they achieved an accuracy of at dB SNR.
In our previous work [20], we created the noisy drone RF signals dataset (https://www.kaggle.com/datasets/sgluege/noisy-drone-rf-signal-classification) from six drones and four remote controllers. It consists of non-overlapping signal vectors of samples, corresponding to ms at MHz. We added Labnoise (Bluetooth, Wi-Fi, Amplifier) and Gaussian noise to the dataset and mixed it with the drone signals with SNR dB. Using IQ data and spectrogram data to train different CNNs, we found an advantage in favour of the 2D spectrogram representation of the data. There was no performance difference at SNR dB but a major improvement in the balanced accuracy at low SNR levels, i.e. % on the spectrogram data compared to % on the IQ data at dB SNR.
Recently, [21] proposed an anchor-free object detector based on keypoints for drone RF signal spectrograms. They also proposed an adversarial learning-based data adaptation method to generate domain-independent and domain-aligned features. Given five different types of drones, they report a mean average precision of , which drops to when adding Gaussian noise with dB SNR. The raw data used in their work is available (https://www.kaggle.com/datasets/zhaoericry/drone-rf-dataset), but is unfortunately not usable without further documentation.
Table 1: Overview of openly available drone RF datasets.
As we have seen in other fields, such as computer vision, the success of DL can be attributed to: (a) high-capacity models; (b) increased computational power; and (c) the availability of large amounts of labelled data [22]. Thus, given the large amount of available raw RF signals (cf. Tab. 1), we promote the idea of open and reusable data to facilitate model development and model comparison.
With the noisy drone RF signals dataset [20], we have provided a first ready-to-use dataset to enable rapid model development, without the need for any data preparation. Furthermore, the dataset contains samples that can be considered as "hard" in terms of noise, i.e. Bluetooth + Wi-Fi + Gaussian noise at very low SNRs, and allows a direct comparison with the published results.
While the models proposed in [20] performed reasonably well in the training/lab setting, we found it difficult to transfer their performance to practical application. The reason was the choice of rather short signal vectors of samples, corresponding to ms at MHz. Since the drone signals occur in short bursts of ms with a repetition period of ms, our continuously running classifier predicts a drone whenever a burst occurs and noise during the repetition period of the signal. Therefore, in order to provide a stable and reliable classification every second, one would need an additional "layer" to pool the classifier outputs given every ms.
In the present work, we follow a data-centric approach and simply increase the length of the input signal to ms to train a classifier in an end-to-end manner. Again, we provide the data used for model development in the hope that it will inspire others to develop better models.
In the next section, we briefly describe the data collection and preprocessing procedure. Section 3 describes the model architectures and their training/validation method. In addition, we describe the setup of a low-cost drone detection system and of the field test. The resulting performance metrics are presented in Section 4 and are further discussed in Section 5.
2 MATERIALS
We used the raw RF signals from the drones that were collected in [20]. Nevertheless, we briefly describe the data acquisition process again to provide a complete picture of the development from the raw RF signal to the deployment of a detection system within a single manuscript.
2.1 DATA ACQUISITION
The drone's remote control and, if present, the drone itself were placed in an anechoic chamber to record the raw RF signal without interference for at least one minute. The signals were received by a log-periodic antenna and sampled and stored by an Ettus Research USRP B210, see Fig. 1. In the static measurement, the respective signals of the remote control (TX) alone or with the drone (RX) were measured. In the dynamic measurement, one person at a time was inside the anechoic chamber and operated the remote control (TX) to generate a signal that is as close to reality as possible. All signals were recorded at a sampling frequency of 56 MHz (the highest possible real-time bandwidth). All drone models and recording parameters are listed in Tab. 2, including both uplink and downlink signals.
Table 2: Transmitters and receivers recorded in the development dataset and their respective class labels. Additionally, we show the center frequency (GHz), the channel spacing (MHz), the burst duration (ms), and the repetition period of the respective signals (ms).

Transmitter           | Receiver          | Label
DJI Phantom GL300F    | DJI Phantom 4 Pro | DJI
Futaba T7C            | -                 | FutabaT7
Futaba T14SG          | Futaba R7008SB    | FutabaT14
Graupner mx-16        | Graupner GR-16    | Graupner
Bluetooth/Wi-Fi Noise | -                 | Noise
Taranis ACCST         | X8R Receiver      | Taranis
Turnigy 9X            | -                 | Turnigy

a) The repetition period of the Turnigy transmitter is not static. First bursts were observed after ms; the following signal bursts were observed in the interval ms.
We also recorded three types of noise and interference. First, Bluetooth/Wi-Fi noise was recorded using the hardware setup described above. Measurements were taken in a public and busy university building. In this open recording setup, we had no control over the exact number or types of active Bluetooth/Wi-Fi devices and the actual traffic in progress.
Second, artificial white Gaussian noise was used, and third, receiver noise was recorded for seconds from the USRP at various gain settings ( dB in steps of dB) without the antenna attached. This should prevent the final model from misclassifying quantisation noise in the absence of a signal, especially at low gain settings.
2.2 DATA PREPARATION
To reduce memory consumption and computational effort, we reduced the bandwidth of the signals by downsampling from 56 MHz to 14 MHz using the SciPy [23] signal.decimate function with a Chebyshev type I filter.
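A minimal sketch of this step, assuming a factor-4 decimation on complex IQ samples (signal.decimate applies a Chebyshev type I IIR anti-aliasing filter by default):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
iq = rng.standard_normal(56_000) + 1j * rng.standard_normal(56_000)  # stand-in IQ

# 56 MHz -> 14 MHz is a factor-4 decimation; decimate low-pass filters
# the signal before keeping every 4th sample.
iq_ds = signal.decimate(iq, 4)
print(len(iq), "->", len(iq_ds))
```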
The drone signals occur in short bursts, with only low-power amplifier or background noise in between (cf. Tab. 2). We divided the signals into non-overlapping vectors of samples ( ms), and only vectors containing a burst, or at least a partial burst, were used for the development dataset. This was achieved by applying an energy threshold. As the recordings were made in an anechoic chamber, the signal burst is always clearly visible. Hence, we only used vectors that contained a portion of the signal whose energy was above the threshold, which was arbitrarily set at of the average energy of the entire recording.
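A minimal sketch of this burst selection, with the vector length and the relative energy threshold as placeholders for the values given above:

```python
import numpy as np

def select_burst_vectors(x, vec_len, rel_threshold):
    """Cut a recording into non-overlapping vectors of length vec_len
    and keep only those containing (part of) a burst, i.e. at least
    one sample whose energy exceeds rel_threshold times the average
    energy of the entire recording."""
    threshold = rel_threshold * np.mean(np.abs(x) ** 2)
    n_vec = len(x) // vec_len
    vectors = x[: n_vec * vec_len].reshape(n_vec, vec_len)
    # keep vectors whose peak sample energy exceeds the threshold
    keep = (np.abs(vectors) ** 2).max(axis=1) > threshold
    return vectors[keep]
```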
The selected drone signal vectors were normalised to a carrier power of per sample, i.e. only the part of the signal vector containing drone bursts was considered for the power calculation ( samples out of ). This was achieved by identifying the bursts as those samples where a smoothed energy was above a threshold. Denoting the set of burst samples by $\mathcal{B}$ and the target carrier power by $P_0$, the signal vectors are thus normalised by

$\tilde{s}[n] = s[n]\,\sqrt{P_0 \Big/ \tfrac{1}{|\mathcal{B}|}\sum_{k\in\mathcal{B}}|s[k]|^2}$   (1)
Noise vectors (Bluetooth, Wi-Fi, amplifier, Gaussian) with $N$ samples were normalised to a mean power of $P_0$ by

$\tilde{w}[n] = w[n]\,\sqrt{P_0 \Big/ \tfrac{1}{N}\sum_{k=1}^{N}|w[k]|^2}$   (2)
Finally, the normalised drone signal vectors were mixed with the normalised noise vectors by

$x[n] = \tilde{s}[n] + 10^{-\mathrm{SNR}/20}\,\tilde{w}[n]$   (3)

to generate the noisy drone signal vectors at different SNRs (in dB).
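The normalisation and mixing steps of Eqs. (1)-(3) could be sketched as follows; target_power and the burst mask stand in for the values described above:

```python
import numpy as np

def normalise_power(x, target_power, mask=None):
    """Scale x so that its mean power over `mask` equals target_power.

    For drone vectors the mask selects only the burst samples, cf.
    Eq. (1); for noise vectors the whole vector is used, cf. Eq. (2).
    """
    samples = x[mask] if mask is not None else x
    mean_power = np.mean(np.abs(samples) ** 2)
    return x * np.sqrt(target_power / mean_power)

def mix_at_snr(s_norm, w_norm, snr_db):
    """Mix power-normalised signal and noise at a given SNR, cf. Eq. (3)."""
    return s_norm + 10.0 ** (-snr_db / 20.0) * w_norm
```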
2.3 DEVELOPMENT DATASET
As described in Sec. 2.2, the drone signals were mixed with noise. More specifically, of the drone signals were mixed with Labnoise (Bluetooth + Wi-Fi + Amplifier) and with Gaussian noise. In addition, we created a separate noise class by mixing Labnoise and Gaussian noise in all possible combinations (i.e., Labnoise + Labnoise, Labnoise + Gaussian noise, Gaussian noise + Labnoise, and Gaussian noise + Gaussian noise). For the drone signal classes, as for the noise class, the number of samples was evenly distributed over the interval of SNRs dB in steps of dB, i.e., - samples per SNR level. The resulting number of samples per class is given in Tab. 3.
Table 3: Number of samples in the different classes in the development dataset.

Class | DJI | FutabaT14 | FutabaT7 | Graupner | Taranis | Turnigy | Noise
#samples | | | | | | |
In our previous work [20] we found an advantage in using the spectrogram representation of the data compared to the IQ representation, especially at low SNR levels. Therefore, we transform the raw IQ signals by computing the spectrum of each sample with consecutive Fourier transforms over non-overlapping segments of length for the real and imaginary parts of the signal. That is, the two IQ signal vectors are represented as two matrices. Fig. 2 shows four samples of the dataset at different SNRs. Note that we have plotted the log power spectrogram of the complex spectrum $S$ as

$S_{\log} = 10\log_{10}\!\big(\Re(S)^2 + \Im(S)^2\big)$   (4)
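A sketch of this spectrogram computation, with the segment length seg_len as a placeholder for the value used above:

```python
import numpy as np

def iq_to_spectrogram(x, seg_len):
    """Consecutive FFTs over non-overlapping segments of length seg_len.

    Returns an array of shape (2, n_seg, seg_len) holding the real and
    imaginary parts of the complex spectrum as two input channels.
    """
    n_seg = len(x) // seg_len
    segments = x[: n_seg * seg_len].reshape(n_seg, seg_len)
    spectrum = np.fft.fft(segments, axis=1)
    return np.stack([spectrum.real, spectrum.imag])

def log_power(spec):
    """Log power spectrogram for plotting, cf. Eq. (4)."""
    return 10.0 * np.log10(spec[0] ** 2 + spec[1] ** 2 + 1e-12)
```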
2.4 DETECTION SYSTEM PROTOTYPE
For field use, a system based on a mobile computer was used as shown in Fig. 3 and illustrated in Fig. 4. The RF signals were received using a directional left-hand circularly polarised antenna (H&S SPA 2400/70/9/0/CP).
The antenna gain of dBi and the front-to-back ratio of dB helped to increase the detection range and to attenuate unwanted interferers in the opposite direction. Circular polarisation was chosen to eliminate the alignment problem, as the transmitting antennas have a linear polarisation. The USRP B210 was used to down-convert and digitise the RF signal at a sampling rate of Msps. On the mobile computer, the GNU Radio program collected the baseband IQ samples in batches of one second and sent one batch at a time to our PyTorch model, which classified the signal. To speed up the computations in the model, we utilised an Nvidia GPU in the computer. The classification results were then visualised in real time in a dedicated GUI.
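As an illustration of this processing chain, a hypothetical glue function between the SDR front end and the classifier (the function names and shapes are assumptions, not the actual GNU Radio code) could look like this:

```python
import numpy as np
import torch

def classify_batch(model, iq_batch, to_spectrogram, device="cuda"):
    """Classify one second of baseband IQ samples.

    iq_batch: complex numpy array holding one second of samples.
    to_spectrogram: function cutting the batch into model-sized
    two-channel spectrograms of shape (n, 2, H, W).
    The model is assumed to already reside on `device`.
    """
    model.eval()
    specs = to_spectrogram(iq_batch)
    x = torch.from_numpy(specs).float().to(device)
    with torch.no_grad():  # inference only, no gradients needed
        logits = model(x)
    return logits.argmax(dim=1).cpu().numpy()  # one class index per vector
```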
Figure 3: Block diagram of the mobile drone detection system.
Figure 4: Detection prototype at the Zurich Lake in Rapperswil.
3 METHODS
3.1 MODEL ARCHITECTURE AND TRAINING
As in [20], we chose the Visual Geometry Group (VGG) CNN architecture [24]. The main idea of this architecture is to use multiple layers of small (3×3) convolutional filters instead of larger ones. This is intended to increase the depth and expressiveness of the network while reducing the number of parameters. There are several variants of this architecture, which differ in the number of convolutional layers. We used the variants with a batch normalisation [25] layer after the convolutions, denoted as VGG11_BN to VGG19_BN. For the dense classification layer, we used linear units followed by linear units at the output (one unit per class).
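A sketch of such a model, built here from the torchvision implementation of VGG11_BN with the first convolution adapted to two-channel spectrograms and the output layer to seven classes (the exact classifier widths used above are not reproduced):

```python
import torch.nn as nn
from torchvision.models import vgg11_bn

def build_model(n_classes=7, in_channels=2):
    """VGG11 with batch normalisation for two-channel spectrograms."""
    model = vgg11_bn(weights=None)  # train from scratch on RF data
    # Accept 2 input channels (real/imaginary spectra) instead of RGB.
    model.features[0] = nn.Conv2d(in_channels, 64, kernel_size=3, padding=1)
    # One output unit per class instead of the 1000 ImageNet classes.
    model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, n_classes)
    return model
```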
A stratified 5-fold train-validation-test split was used as follows. In each fold, we trained a network using and of the available samples of each class for training and testing, respectively. Repeating the stratified split five times ensures that each sample was in the test set exactly once in each experiment. Within the training set, of the samples were used as the validation set during training.
Model training was performed for epochs with a batch size of . The PyTorch [26] implementation of the Adam algorithm [27] was used with a learning rate of , betas of , and a weight decay of .
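The training setup could be sketched as follows; the hyperparameter values are placeholders standing in for the ones given above:

```python
import numpy as np
import torch
from sklearn.model_selection import StratifiedKFold

# Placeholder data: X holds two-channel spectrograms, y the class labels.
X = np.zeros((100, 2, 64, 64), dtype=np.float32)
y = np.repeat(np.arange(5), 20)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    model = build_model()  # cf. the sketch in Sec. 3.1
    # Adam with placeholder hyperparameters for the elided values.
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4,
                                 betas=(0.9, 0.999), weight_decay=0.0)
    # ... split a validation set off the training indices, then train
```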
3.2 MODEL EVALUATION
During training, the model was
evaluated on the validation set after each epoch. If the balanced
accuracy on the validation set increased, it was saved. After training,
the model with the highest balanced accuracy on the validation set was
evaluated on the withheld test data. The performance of the models on
the test data was accessed in terms of classification accuracy and
balanced accuracy.
As accuracy simply measures the proportion of correct predictions out of the total number of observations, it can be misleading for unbalanced datasets. In our case, the noise class is over-represented in the dataset (cf. Tab. 3). Therefore, we also report the balanced accuracy, which is defined as the average of the recall obtained for each class, i.e. it gives equal weight to each class regardless of how frequent or rare it is.
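A toy example with scikit-learn illustrates the difference between the two metrics when one class dominates:

```python
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# Toy labels with an over-represented class 0 (e.g. "Noise"):
y_true = [0] * 8 + [1, 1]
y_pred = [0] * 8 + [0, 1]

print(accuracy_score(y_true, y_pred))           # 0.9, dominated by class 0
print(balanced_accuracy_score(y_true, y_pred))  # 0.75 = mean(1.0, 0.5)
```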
3.3 VISUALISATION OF MODEL EMBEDDINGS
Despite their effectiveness, CNNs are often criticised for being “black boxes”. Understanding the feature representations, or embeddings, learned by the CNN
helps to demystify these models and provide some understanding of their
capabilities and limitations.
In general, embeddings are high-dimensional vectors generated by the
intermediate layers that capture essential patterns from the input data.
In our case, we chose the least complex VGG11_BN model to visualise its embeddings. When running inference on the test data, we collected the activations at the last dense classification layer, which consists of units. Given test samples, this results in a matrix. Using t-distributed Stochastic Neighbor Embedding (t-SNE) [28] and Uniform Manifold Approximation and Projection (UMAP) [29] as dimensionality reduction techniques, we project these high-dimensional embeddings into a lower-dimensional space, creating interpretable visualisations that reveal the model's internal data representations.
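A sketch of this procedure, collecting the activations with a forward hook and projecting them with t-SNE (the DataLoader and the t-SNE settings are assumptions):

```python
import numpy as np
import torch
from sklearn.manifold import TSNE

embeddings = []
# A forward hook on the last dense layer collects its activations.
hook = model.classifier[-1].register_forward_hook(
    lambda mod, inp, out: embeddings.append(out.detach().cpu().numpy()))

model.eval()
with torch.no_grad():
    for x, _ in test_loader:  # hypothetical DataLoader over the test set
        model(x)
hook.remove()

# Project the stacked activations to 2D for visualisation.
emb_2d = TSNE(n_components=2, perplexity=30.0).fit_transform(np.vstack(embeddings))
```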
Our goals were to understand how the
model clusters and separates different classes, to identify potential
overlaps or ambiguities, and to examine the hierarchical relationships
within the learned features.
3.4 DETECTION SYSTEM FIELD TEST
We conducted a field test of the
detection system in Rapperswil at the Zurich Lake. The drone detection
prototype was placed on the shore (cf. Fig. 4)
in line of sight of a wooden boardwalk across the lake, with no
buildings to interfere with the signals. The transmitters were mounted
on a m long wooden pole. The signals from the transmitters were recorded (and classified in real time) at four positions along the walkway, at approximately 110 m, 340 m, 560 m and 670 m from the detection system. Figure 5 shows an overview of the experimental setup.
At each recording position, we measured with the directional antenna at three different angles: at 0∘, facing the drones and/or remote controls; at 90∘, perpendicular to the direction of the transmitters; and at 180∘, in the opposite direction. Directing the antenna in the opposite direction should result in a dB attenuation of the radio signals.
Figure 5: Experimental measurement setup at the Zurich Lake in Rapperswil. One can see the four recording positions along the wooden walkway and the detection system positioned at the lakeside. Further, recordings were made at different angles of the directional antenna, indicated by the arrows at the detection system.
Table 4
lists the drones and/or remote controls used in the field test. Note
that the Graupner drone and remote control are part of the development
dataset (cf. Tab. 2),
but were not measured in the field experiment. We assume that no other
drones were present during the measurements, so recordings where none of
our transmitters were used are labelled as “Noise”.
Table 4: Drones and/or remotes used in the field test
For each transmitter, distance, and angle, to s of signal, or approximately spectrograms, were classified live and recorded. The resulting number of samples for each class, distance, and angle is shown in Tab. 5.
Table 5: Number of samples (#samples) for each class, distance and antenna direction (angle) recorded in the field test. Recordings at m distance have no active transmitter and were therefore labelled "Noise".

Class | #samples
DJI |
FutabaT14 |
FutabaT7 |
Noise |
Taranis |
Turnigy |

Distance [m] | #samples

Angle [∘] | #samples
4 RESULTS
4.1 CLASSIFICATION PERFORMANCE ON THE DEVELOPMENT DATASET
Table 6 shows the overall mean ± standard deviation of accuracy and balanced accuracy on the test data of the development dataset (cf. Sec. 2.3), obtained in the 5-fold cross-validation of the different models.
There is no meaningful difference in
performance between the models, even when the model complexity increases
from VGG11_BN to VGG19_BN. The number of epochs for training (#epochs)
shows when the highest balanced accuracy was reached on the validation
set. It can be seen that the least complex model, VGG11_BN, required the
least number of epochs compared to the more complex models. However,
the resulting classification performance is the same.
Table 6: Mean ± standard deviation of the accuracy (Acc.) and the balanced accuracy (balanced Acc.) obtained in 5-fold cross-validation of the different models on the test data of the development dataset. An indication of the model training time is given by the mean ± standard deviation of the number of training epochs (#epochs), i.e. when the highest balanced accuracy on the validation set was reached. The number of trainable parameters (#params) indicates the complexity of the model.
Model | Acc. | balanced Acc. | #epochs | #params
VGG11_BN | | | |
VGG13_BN | | | |
VGG16_BN | | | |
VGG19_BN | | | |
Figure 6 shows the resulting 5-fold mean balanced accuracy over the SNR levels in steps of dB. Note that we do not show the standard deviation to keep the plot readable. In general, we observe a drastic degradation in performance from dB down to near chance level at dB.
The vast majority of misclassifications occurred between noise and drones, and not between different types of drones. Figure 7 illustrates this fact. It shows the confusion matrix of the VGG11_BN model for a single validation on the test data for the samples with dB SNR.
Figure 6: Mean balanced accuracy obtained in the 5-fold cross-validation of the different models on the test set of the development dataset over the SNR levels.
Figure 7: Confusion matrix of the outputs of the VGG11_BN model on a single fold for the samples at dB SNR from the test data. The average balanced accuracy is .
4.2 EMBEDDING SPACE VISUALISATION
Figure 8 shows the 2D t-SNE visualisation of the VGG11_BN embeddings of
test samples from the development dataset. It can be seen that each
class forms a separate cluster. While the different drone signal
clusters are rather small and dense, the noise cluster takes up most of
the embedding space and even forms several sub-clusters. This is most
likely due to the variety of the signals used in the noise class,
i.e. Bluetooth and Wi-Fi signals plus Gaussian noise.
We used t-SNE
for dimensionality reduction because of its ability to preserve local
structure within the high-dimensional embedding space. Furthermore, t-SNE has been widely adopted in the ML
community and has a well-established track record for high-dimensional
data visualisation. However, it is sensitive to hyperparameters such as
perplexity and requires some tuning, i.e. different parameters can lead to considerably different results.
It can be argued that UMAP
would be a better choice due to its balanced preservation of local and
global structure together with its robustness to hyperparameters.
Therefore, we created a web application (https://visvgg11bndronerfembeddings.streamlit.app) that allows users to test and compare both approaches with different hyperparameters.
Figure 8: 2D t-SNE visualisation of the VGG11_BN embeddings of test samples from the development dataset. The hyperparameters for t-SNE were: metric "euclidean", number of iterations , perplexity , and method for gradient approximation "barnes_hut".
4.3 CLASSIFICATION PERFORMANCE IN THE FIELD TEST
For each model architecture, we performed 5-fold cross-validation on the development dataset (cf. Sec. 3.1), resulting in five trained models per architecture. Thus, we also evaluated all five trained models on the field test data. We report the balanced accuracy ± standard deviation for each model architecture on the complete field test dataset, averaged over all directions and distances, in Tab. 7.
Table 7: Mean ± standard deviation of the balanced accuracy (balanced Acc.) of the complete field test recordings for the different models.

Model | balanced Acc.
VGG11_BN |
VGG13_BN |
VGG16_BN |
VGG19_BN |
As observed on the development dataset (cf. Tab. 6),
there is no meaningful difference in performance between the model
architectures. We therefore focus on VGG11_BN, the simplest model
trained, in the more detailed analysis of the field test results.
A live system should trigger an alarm when a drone is present. The question of whether a signal stems from a drone at all is therefore more important than predicting the correct type of drone. Hence, we also evaluated the models on a binary problem with the two classes "Drone" (covering all six drone classes in the development dataset) and "Noise".
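The mapping from the seven-class output to this binary detection problem can be sketched as follows (the noise label index and the label arrays are assumptions):

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

NOISE_LABEL = 6  # assumed index of the "Noise" class

def to_binary(labels):
    """Map seven-class labels to 1 ("Drone") or 0 ("Noise")."""
    return np.where(np.asarray(labels) == NOISE_LABEL, 0, 1)

# y_true, y_pred: seven-class labels from the field test (placeholders)
# detection_bal_acc = balanced_accuracy_score(to_binary(y_true), to_binary(y_pred))
```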
Table 8 shows that the accuracies depend strongly on the class. Our models generalise well to the drones in the dataset, with the exception of the DJI. The dependence on direction is not as strong as expected. Orienting the antenna 180∘ away from the transmitter reduces the signal power by about dB, resulting in a lower SNR and lower classification accuracy. However, as the transmitters were still quite close to the antenna, the effect is not pronounced. As we have seen on the development dataset in Fig. 6, there is a clear drop in accuracy once the SNR is below dB. Apparently, we were still above this threshold, regardless of the direction of the antenna.
Table 8: Mean balanced accuracy ± standard deviation of the VGG11_BN models on the field test recordings for the different classes for each direction (0∘, 90∘ and 180∘). The upper part shows the accuracies for the classification problem (seven classes) and the lower part the accuracies for the detection problem "Drone" vs. "Noise".

Class | 0∘ | 90∘ | 180∘
DJI | | |
FutabaT14 | | |
FutabaT7 | | |
Noise | | |
Taranis | | |
Turnigy | | |
Drone | | |
Noise | | |
What may be surprising is the low accuracy on the signals with no active transmitter, labelled as "Noise", in the direction of the lake (0∘). Given the uncontrolled nature of a field test, it could well be that a drone was actually flying on the other side of the km wide lake. This could explain the false positives we observed in that direction.
Table 9 shows the average balanced accuracy of the VGG11_BN models on the field test data collected at different distances for each antenna direction. There is a slight decrease in accuracy with distance. However, the longest distance of 670 m appears to be too short to be a problem for the system. Unfortunately, this was the longest line-of-sight distance that could be recorded at this location.
Table 9: Mean balanced accuracy ± standard deviation of the VGG11_BN models on the field test data with active transmitters collected at different distances for each antenna direction (0∘, 90∘ and 180∘). The upper part shows the accuracies for the classification problem (seven classes) and the lower part the accuracies for the detection problem "Drone" vs. "Noise".

Classification
Distance (m) | 110 | 340 | 560 | 670

Detection
Distance (m) | 110 | 340 | 560 | 670
Figure 9
shows the confusion matrix for the outputs of the VGG11_BN model of a
single fold on the field test data. As with the development dataset
(cf. Fig. 7), most of the confusion is between noise and drones rather than between different types of drones.
Figure 9: Confusion
matrix of the outputs of the VGG11_BN model on a single fold for the
samples from the field test data. The average balanced accuracy is .
5 DISCUSSION
We were able to show that a standard CNN, trained on drone RF
signals recorded in a controlled laboratory environment and
artificially augmented with noise, generalised well to the more
challenging conditions of a real-world field test.
The drone detection system consisted of rather simple, low-budget hardware (a consumer-grade notebook with GPU plus an SDR). Recording parameters such as the sampling frequency and the length of the input vectors were set to enable real-time detection with the limited amount of memory and computing power. This means that data acquisition, pre-processing and model inference did not take longer than the signal being processed ( ms per sample in our case).
Evidently, the VGG models were able to learn the relevant features for drone classification from the complex spectrograms of the RF signal. In this respect, we did not find any advantage in using more complex models, such as VGG19_BN, over the least complex model, VGG11_BN (cf. Tabs. 6 and 7).
Furthermore, we have seen that the
misclassifications mainly occur between the noise class and the drones,
and not between the different drones themselves (cf. Figs. 7 and 9).
This is particularly relevant for the application of drone detection
systems in security-sensitive areas. The first priority is to detect any
kind of UAV, regardless of its type.
Based on our experience and results, we see the following limitations of our work. The field test showed that the models can be used and work reliably (cf. Tab. 8). However, it is in the nature of a field test that the level of interference from Wi-Fi/Bluetooth noise and the possible presence of other drones cannot be fully controlled. Furthermore, due to the limited space/distance between transmitter and receiver in our field test setup, we were not able to clearly demonstrate the effect of free-space attenuation on detection performance (cf. Tab. 9).
Regarding the use of simple CNNs as classifiers, it is not possible to reliably predict whether multiple transmitters are present. In that case, an object detection approach on the spectrograms could provide a more fine-grained prediction; see for example the works [30, 31] and [21]. Nevertheless, the current approach will still detect a drone if one or more are present.
We have only tested a limited set of VGG architectures. It remains to be seen whether more recent architectures, such as the pre-trained Vision Transformer [32],
generalise as well or better. We hope that our development dataset will
inspire others to further optimise the model side of the problem and
perhaps find a model architecture with better performance.
Another issue to consider is the occurrence of unknown drones, i.e. drones that are not part of the training set. Examining the embedding space (cf. Sec. 4.2) gives a first idea of whether a signal clearly belongs to a known, dense drone cluster or rather falls into the larger, less dense noise cluster. We believe that a combination of an unsupervised deep autoencoder approach [33, 34] with an additional classification part (cf. [35]) would allow, first, a stable classification of known samples and, second, an indication of whether a sample is known or rather an anomaly.
References
[1] N. Al-lQubaydhi, A. Alenezi, T. Alanazi, A. Senyor, N. Alanezi,
B. Alotaibi, M. Alotaibi, A. Razaque, and S. Hariri, “Deep learning for
unmanned aerial vehicles detection: A review,” Computer Science Review, vol. 51, p. 100614, 2 2024. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S1574013723000813
[2] M. H. Rahman, M. A. S. Sejan, M. A. Aziz, R. Tabassum, J.-I. Baik, and
H.-K. Song, “A comprehensive survey of unmanned aerial vehicles
detection and classification using machine learning approach:
Challenges, solutions, and future directions,” Remote Sensing, vol. 16, p. 879, 3 2024. [Online]. Available: https://www.mdpi.com/2072-4292/16/5/879
[3] M. S. Allahham, M. F. Al-Sa’d, A. Al-Ali, A. Mohamed, T. Khattab, and
A. Erbad, “Dronerf dataset: A dataset of drones for rf-based detection,
classification and identification,” Data in Brief, vol. 26, p. 104313, 10 2019. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S2352340919306675
[4] M. F. Al-Sa’d, A. Al-Ali, A. Mohamed, T. Khattab, and A. Erbad,
“Rf-based drone detection and identification using deep learning
approaches: An initiative towards a large open source drone database,” Future Generation Computer Systems, vol. 100, pp. 86–97, 11 2019.
[5] C. J. Swinney and J. C. Woods, “Unmanned aerial vehicle flight mode
classification using convolutional neural network and transfer
learning,” in 2020 16th International Computer Engineering Conference (ICENCO), 2020, pp. 83–87.
[6] Y. Zhang, “Rf-based drone detection using machine learning,” in 2021 2nd International Conference on Computing and Data Science (CDS), 2021, pp. 425–428.
[7] C. Ge, S. Yang, W. Sun, Y. Luo, and C. Luo, “For rf signal-based uav
states recognition, is pre-processing still important at the era of deep
learning?” in 2021 7th International Conference on Computer and Communications (ICCC), 2021, pp. 2292–2296.
[8] Y. Xue, Y. Chang, Y. Zhang, J. Sun, Z. Ji, H. Li, Y. Peng, and J. Zuo,
“Uav signal recognition of heterogeneous integrated knn based on genetic
algorithm,” Telecommunication Systems, vol. 85, pp. 591–599, 4 2024. [Online]. Available: https://link.springer.com/10.1007/s11235-023-01099-x
[9] A. AlKhonaini, T. Sheltami, A. Mahmoud, and M. Imam, “Uav detection using reinforcement learning,” Sensors, vol. 24, no. 6, 2024. [Online]. Available: https://www.mdpi.com/1424-8220/24/6/1870
[10] M. Ezuma, F. Erden, C. K. Anjinappa, O. Ozdemir, and I. Guvenc,
“Detection and classification of uavs using rf fingerprints in the
presence of wi-fi and bluetooth interference,” IEEE Open Journal of the Communications Society, vol. 1, pp. 60–76, 2020. [Online]. Available: https://ieeexplore.ieee.org/document/8913640/
[12] E. Ozturk, F. Erden, and I. Guvenc, “Rf-based low-snr classification of uavs using convolutional neural networks,” ITU Journal on Future and Evolving Technologies, vol. 2, pp. 39–52, 7 2021. [Online]. Available: https://www.itu.int/pub/S-JNL-VOL2.ISSUE5-2021-A04
[13] C. J. Swinney and J. C. Woods, “Dronedetect dataset: A radio frequency
dataset of unmanned aerial system (uas) signals for machine learning
detection & classification,” 2021. [Online]. Available: https://dx.doi.org/10.21227/5jjj-1m32
[14] ——, “Rf detection and classification of unmanned aerial vehicles in environments with wireless interference,” in 2021 International Conference on Unmanned Aircraft Systems (ICUAS), 2021, pp. 1494–1498.
[15] S. Kunze and B. Saha, “Drone classification with a convolutional neural network applied to raw iq data,” in 2022 3rd URSI Atlantic and Asia Pacific Radio Science Meeting (AT-AP-RASC), May 2022, pp. 1–4. [Online]. Available: https://ieeexplore.ieee.org/document/9814170/
[16] O. Medaiyese, M. Ezuma, A. Lauf, and A. Adeniran, “Cardinal rf (cardrf):
An outdoor uav/uas/drone rf signals with bluetooth and wifi signals
dataset,” 2022. [Online]. Available: https://dx.doi.org/10.21227/1xp7-ge95
[17] O. O. Medaiyese, M. Ezuma, A. P. Lauf, and A. A. Adeniran, “Hierarchical
learning framework for uav detection and identification,” IEEE Journal of Radio Frequency Identification, vol. 6, pp. 176–188, 2022.
[18] O. O. Medaiyese, M. Ezuma, A. P. Lauf, and I. Guvenc, “Wavelet transform
analytics for rf-based uav detection and identification system using
machine learning,” Pervasive and Mobile Computing, vol. 82, p. 101569, 6 2022. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S1574119222000219
[19] F. N. Iandola, M. W. Moskewicz, K. Ashraf, S. Han, W. J. Dally, and
K. Keutzer, “Squeezenet: Alexnet-level accuracy with 50x fewer
parameters and <1mb model size,” CoRR, vol. abs/1602.07360, 2016. [Online]. Available: http://arxiv.org/abs/1602.07360
[20] S. Glüge, M. Nyfeler, N. Ramagnano, C. Horn, and C. Schüpbach, “Robust drone detection and classification from radio frequency signals using convolutional neural networks,” in Proceedings of the 15th International Joint Conference on Computational Intelligence - NCTA, INSTICC. SciTePress, 2023, pp. 496–504.
[21] R. Zhao, T. Li, Y. Li, Y. Ruan, and R. Zhang, “Anchor-free multi-uav detection and classification using spectrogram,” IEEE Internet of Things Journal, vol. 11, pp. 5259–5272, 2 2024. [Online]. Available: https://ieeexplore.ieee.org/document/10221859/
[22] C. Sun, A. Shrivastava, S. Singh, and A. Gupta, “Revisiting unreasonable effectiveness of data in deep learning era,” in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 843–852.
[23] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy,
D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J.
van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J.
Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, İ. Polat, Y. Feng,
E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman,
I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H.
Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy 1.0 Contributors,
“SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python,” Nature Methods, vol. 17, pp. 261–272, 2020.
[24] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in 3rd
International Conference on Learning Representations, ICLR 2015, San
Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, Y. Bengio and Y. LeCun, Eds., 2015. [Online]. Available: http://arxiv.org/abs/1409.1556
[25] S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” in Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ser. ICML’15. JMLR.org, 2015, p. 448–456.
[26] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan,
T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf,
E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner,
L. Fang, J. Bai, and S. Chintala, “Pytorch: An imperative style,
high-performance deep learning library,” in Advances in Neural Information Processing Systems 32,
H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and
R. Garnett, Eds. Curran Associates, Inc., 2019, pp. 8024–8035.
[27] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in 3rd
International Conference on Learning Representations, ICLR 2015, San
Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, Y. Bengio and Y. LeCun, Eds., 2015. [Online]. Available: http://arxiv.org/abs/1412.6980
[28] L. van der Maaten and G. Hinton, “Visualizing data using t-sne,” Journal of Machine Learning Research, vol. 9, no. 86, pp. 2579–2605, 2008. [Online]. Available: http://jmlr.org/papers/v9/vandermaaten08a.html
[29] L. McInnes, J. Healy, N. Saul, and L. Großberger, “Umap: Uniform manifold approximation and projection,” Journal of Open Source Software, vol. 3, no. 29, p. 861, 2018. [Online]. Available: https://doi.org/10.21105/joss.00861
[30] K. N. R. Surya Vara Prasad and V. K. Bhargava, “A classification algorithm for blind uav detection in wideband rf systems,” in 2020 IEEE 92nd Vehicular Technology Conference (VTC2020-Fall), 2020, pp. 1–7.
[31] S. Basak, S. Rajendran, S. Pollin, and B. Scheers, “Combined rf-based drone detection and classification,” IEEE Transactions on Cognitive Communications and Networking, vol. 8, no. 1, pp. 111–120, 2022.
[32] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai,
T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly,
J. Uszkoreit, and N. Houlsby, “An image is worth 16x16 words:
Transformers for image recognition at scale,” ICLR, 2021.
[33] S. Lu and R. Li, DAC–Deep Autoencoder-Based Clustering: A General Deep Learning Framework of Representation Learning. Springer Science and Business Media Deutschland GmbH, 2 2022, vol. 294, pp. 205–216. [Online]. Available: https://link.springer.com/10.1007/978-3-030-82193-7_13
[34] H. Zhou, J. Bai, Y. Wang, J. Ren, X. Yang, and L. Jiao, “Deep radio
signal clustering with interpretability analysis based on saliency map,”
Digital Communications and Networks, 1 2023. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S2352864823000238
[35] E. Pintelas, I. E. Livieris, and P. E. Pintelas, “A convolutional
autoencoder topology for classification in high-dimensional noisy image
datasets,” Sensors, vol. 21, p. 7731, 11 2021. [Online]. Available: https://www.mdpi.com/1424-8220/21/22/7731