Thursday, October 9, 2025

When Satellites Borrow Tricks from the Cloud

How data center traffic management could solve the growing congestion crisis in space

By the time you finish reading this sentence, dozens of satellites will have crossed overhead. They're part of a new generation of "mega-constellations"—fleets of thousands of satellites orbiting Earth, beaming internet connectivity to even the most remote corners of the planet. But there's a problem: these satellites are starting to get in each other's way.

Not physically—space is still plenty big. The congestion happens in the invisible highways of radio spectrum that carry data between Earth and orbit. Think of it like rush hour on a freeway, except the cars are traveling at the speed of light and the traffic jam is happening 300 miles above your head.

For decades, satellite networks have relied on the same traffic management systems that run the regular internet. But those systems were designed for fiber-optic cables on Earth, not for signals bouncing between moving satellites and ground stations scattered across continents. The result? Sluggish connections, unfair distribution of bandwidth, and wasted spectrum—a particularly precious resource when you're beaming data through space.

Now, engineers are looking to an unlikely source for solutions: the massive data centers that power everything from Netflix to ChatGPT. These facilities have spent years perfecting ways to move enormous amounts of information without creating digital traffic jams. And it turns out, their strategies might be just what satellite networks need.

The Satellite Traffic Problem

To understand why satellites struggle with congestion, you need to know a bit about how the internet normally handles it.

When data travels across the internet—whether it's an email, a video call, or this article—it's broken into tiny chunks called packets. These packets hop from router to router until they reach their destination. The system that governs this flow is called TCP, or Transmission Control Protocol, and it's been the internet's traffic cop since the 1970s.

TCP works like a cautious driver. It starts slowly, gradually accelerating as long as packets arrive successfully at their destination. But the moment a packet gets lost—stuck in a digital traffic jam somewhere—TCP slams on the brakes. It assumes the network is congested and dramatically slows down, waiting for conditions to improve.

This works reasonably well when your data only travels a few milliseconds across fiber-optic cables. But satellites introduce a cruel twist: distance.

A signal traveling to a satellite in low Earth orbit and back takes about 20 to 40 milliseconds—fast enough for a smooth video call. But satellites in higher orbits, particularly those sitting in a fixed position above the equator at 22,000 miles up, introduce delays of 600 milliseconds or more. That's more than half a second for a round trip.

"With those long delays, TCP's feedback loop becomes almost comically slow," explains Saravanan Subramanian, a network engineer who has studied the problem. "By the time a satellite link realizes there's congestion and reacts, conditions may have already changed completely."

Making matters worse, satellite networks don't stay still. Low-orbit satellites zip around the planet every 90 minutes, which means your connection has to jump from satellite to satellite as they pass overhead. Each handoff can confuse TCP, causing it to misinterpret normal hiccups as serious congestion and unnecessarily throttle your connection.

Then there's the fairness problem. Ground stations with bigger antennas and more powerful transmitters naturally get better connections. In a congested network, they can end up hogging most of the available spectrum, leaving scraps for everyone else. TCP, designed for networks where everyone plays by the same rules, has no good way to level the playing field.

What Data Centers Figured Out

While satellite engineers wrestled with these challenges, data center operators faced their own version of the same problem—just on a much smaller scale.

Inside facilities that house thousands of servers powering cloud computing and artificial intelligence, information flies between machines at breathtaking speeds. These systems use a technology called RDMA (Remote Direct Memory Access), which lets computers grab data from each other's memory directly, bypassing the usual software middlemen. It's incredibly fast—we're talking microseconds—but also incredibly sensitive to even tiny delays.

Data center engineers discovered that TCP's wait-for-failure approach was too slow. They needed to catch congestion before packets got lost, not after.

Their solution: Explicit Congestion Notification, or ECN for short.

Instead of waiting for packets to disappear into the digital void, ECN works like an early warning system. Network switches monitor their internal queues—the digital waiting rooms where packets line up before being forwarded. When a queue starts filling up but before it overflows, the switch marks outgoing packets with a special flag that essentially says, "Hey, I'm getting busy here."

When these marked packets reach their destination, the receiving computer sends the message back to the sender: "The network is getting congested—slow down a bit." The sender reduces its transmission rate, easing the pressure before any packets get lost.
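In code, the switch side of this early-warning system is remarkably small. Here is a toy Python sketch (the queue model and the two-packet threshold are purely for demonstration, not values from any real switch):

```python
from collections import deque

class Switch:
    """Toy model of an ECN-capable switch queue."""
    def __init__(self, threshold):
        self.queue = deque()
        self.threshold = threshold  # packets; illustrative value

    def enqueue(self, packet):
        # Mark instead of dropping: flag the packet once the queue
        # grows past the threshold, well before it overflows.
        if len(self.queue) >= self.threshold:
            packet["ce"] = True  # "Congestion Experienced"
        self.queue.append(packet)

switch = Switch(threshold=2)
for i in range(4):
    switch.enqueue({"seq": i, "ce": False})

# The first two packets pass unmarked; later arrivals carry the flag.
print([p["ce"] for p in switch.queue])  # [False, False, True, True]
```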

Building on ECN, data center engineers developed an even more sophisticated system called DCQCN (Data Center Quantized Congestion Notification). Instead of just binary "congested" or "not congested" signals, DCQCN provides nuanced feedback. It's the difference between a traffic light that's only red or green versus one that gives you advance warning when it's about to change.

When a sender receives a congestion signal, it quickly cuts its rate in half—like easing off the gas pedal. Then, as conditions improve, it gradually accelerates again. Multiple senders on the same network all follow the same dance, quickly finding a rhythm where everyone gets a fair share without overwhelming the system.
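The sender's half of that dance can be sketched just as briefly. In this illustrative Python snippet, the 5 Mbps step and 100 Mbps ceiling are made-up numbers, not parameters from any real system:

```python
def adjust_rate(rate_mbps, congestion_seen,
                decrease_factor=0.5, increase_mbps=5.0, max_rate=100.0):
    """One control step of the halve-then-recover behavior described
    above: cut the rate multiplicatively on a congestion signal,
    otherwise creep back up additively toward the line rate."""
    if congestion_seen:
        return rate_mbps * decrease_factor
    return min(rate_mbps + increase_mbps, max_rate)

rate = 80.0
rate = adjust_rate(rate, congestion_seen=True)   # 40.0: ease off the gas
rate = adjust_rate(rate, congestion_seen=False)  # 45.0: gradual recovery
print(rate)
```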

This approach has become standard in the massive data centers running modern AI systems, where thousands of graphics processors need to exchange information constantly without stepping on each other's digital toes.

Bringing the Data Center to Space

The parallels between data center congestion and satellite spectrum congestion are striking. Both involve multiple senders competing for shared resources. Both suffer when traditional TCP-style congestion control arrives too late. And both need fairness—ensuring that no single user monopolizes the network.

So how would ECN and DCQCN work for satellites?

Imagine a satellite receiving signals from dozens of ground stations simultaneously, all trying to upload data through the same frequency band. Inside the satellite's electronics, arriving packets queue up waiting to be processed and forwarded—just like in a data center switch.

Under an ECN-based system, when that queue starts filling beyond a certain threshold—say, 30% full—the satellite begins marking packets before sending them on their way. These marks travel through the network and eventually reach the ground stations in the form of acknowledgment messages.

Each ground station, upon receiving these marked acknowledgments, reduces its transmission rate. Crucially, it does so before any packets are actually lost. The satellite's queue stabilizes, packets keep flowing, and spectrum that would have been wasted on retransmissions remains available for useful data.

For downloads from satellites to users on the ground, the system can work in reverse. Satellites could broadcast simple congestion status updates: "I'm lightly loaded—feel free to request more data," or "I'm under heavy load—please back off." User terminals adjust their requests accordingly, preventing any single user from overwhelming the satellite's downlink capacity.

The approach becomes even more powerful when satellites act as intermediaries, with dedicated gateway stations handling the connection between terrestrial networks and the space segment. These gateways could combine ECN-based congestion signaling with traditional quality-of-service techniques, ensuring that urgent traffic (like voice calls) gets priority over bulk data transfers while maintaining fairness across all users.

The Path Forward

Adapting data center techniques to satellite networks isn't as simple as flipping a switch. Satellites use specialized communication protocols, and their software runs on radiation-hardened processors that weren't designed with modern congestion control in mind. Ground stations and user terminals—including millions of consumer satellite dishes—would need firmware updates to understand and respond to congestion signals.

The parameters also need careful tuning. A data center might mark packets when queues are 50 microseconds deep; a satellite might need to set thresholds at tens or hundreds of milliseconds, accounting for the much longer round-trip times involved.

Different orbital altitudes require different approaches. Low-orbit satellites zip around so fast that they need responsive, quick-reacting congestion control. High-orbit satellites, with their leisurely half-second round trips, need gentler algorithms that don't overreact to delayed feedback.

Despite these challenges, the potential benefits are substantial. Computer simulations suggest that ECN-based satellite networks could achieve 40-60% reductions in wasted spectrum from retransmissions compared to traditional TCP. Fairness improves dramatically, with even modest ground stations able to maintain reasonable connections rather than being drowned out by more powerful neighbors. And crucially, the approach scales—it works just as well with ten ground stations as with a thousand.

Some of the necessary groundwork is already happening. International standards bodies are updating satellite communication protocols to support modern congestion control features. The next generation of satellite terminals being designed today could include ECN capability from the start rather than requiring retrofitting.

A Cosmic Irony

There's something delightfully ironic about solving space-age problems with techniques developed for earthbound data centers. But it speaks to a deeper truth: whether information is flowing between servers in a climate-controlled warehouse or between satellites hurtling through the vacuum of space, the fundamental challenges are surprisingly similar.

As mega-constellations continue to grow—with some companies planning networks of tens of thousands of satellites—the need for smarter spectrum management will only intensify. The approach that worked fine with a few dozen satellites sharing the sky won't suffice when hundreds occupy the same frequency bands.

By borrowing proven strategies from data centers and adapting them to the unique demands of satellite communications, engineers can help ensure that our growing orbital infrastructure delivers on its promise: fast, reliable internet access for everyone on Earth, regardless of where they live.

The traffic jam in space is real. But just as rush hour eventually ends on earthly highways, clever engineering can help information flow freely along our cosmic ones.


Further Reading

  • Subramanian, S. R. "Spectrum Congestion Control: ECN/DCQCN Insights." The Data Scientist, 2025. https://thedatascientist.com/spectrum-congestion-control-ecn-dcqcn-insights

  • Zhu, Y., et al. "Congestion Control for Large-Scale RDMA Deployments." ACM SIGCOMM, 2015.

  • 3GPP Technical Report 38.821: "Solutions for NR to support non-terrestrial networks," 2021.

  • Cardwell, N., et al. "BBR: Congestion-Based Congestion Control." Communications of the ACM, 2017.


Explicit Congestion Signaling for Spectrum Management in Non-Terrestrial Networks: A Survey of ECN/DCQCN Adaptation Strategies

Abstract

Non-terrestrial networks (NTNs), particularly low Earth orbit (LEO) satellite constellations, face increasing spectrum congestion as deployment scales accelerate. Traditional Transmission Control Protocol (TCP) congestion control mechanisms demonstrate poor performance in high-latency, dynamic-topology space networks due to their reliance on loss-based feedback. This paper examines the adaptation of data center congestion control techniques—specifically Explicit Congestion Notification (ECN) and Data Center Quantized Congestion Notification (DCQCN)—to satellite spectrum management. We analyze the fundamental limitations of loss-based algorithms in space-air-ground integrated networks, present architectural frameworks for explicit signaling in orbital segments, and discuss implementation considerations for uplink/downlink control planes. Our survey indicates that proactive congestion signaling offers significant improvements in fairness, spectrum efficiency, and throughput stability compared to conventional TCP variants across LEO, MEO, and GEO orbital regimes.

I. Introduction

The proliferation of mega-constellations has fundamentally altered the landscape of satellite communications. As of 2024, multiple operators have deployed thousands of satellites in LEO, with tens of thousands more planned for launch in the coming years. This rapid expansion creates unprecedented demand for limited spectrum resources, particularly in Ku-band and Ka-band frequencies allocated for satellite services.

Traditional terrestrial congestion control algorithms were designed for stable, low-latency networks with relatively predictable path characteristics. Satellite networks, by contrast, exhibit round-trip times (RTTs) ranging from approximately 20-40 ms for LEO constellations to 600 ms for geostationary (GEO) satellites, combined with frequent handovers, variable link quality due to atmospheric conditions, and dynamic topology changes as satellites move relative to ground stations and each other.

Recent work in data center networking has demonstrated the effectiveness of explicit congestion signaling for managing high-throughput, latency-sensitive traffic in Remote Direct Memory Access over Converged Ethernet version 2 (RoCEv2) environments. The success of ECN and DCQCN in these contexts suggests potential applicability to spectrum congestion management in NTNs, where similar requirements for predictable latency and fair resource allocation exist.

II. Background and Related Work

A. Congestion Control in Terrestrial Networks

Loss-based congestion control algorithms, including TCP Reno, CUBIC, and their variants, infer network congestion from packet loss events. These algorithms implement Additive Increase Multiplicative Decrease (AIMD) strategies that increase the congestion window during periods of successful transmission and reduce it sharply upon detecting loss.

The fundamental limitation of loss-based approaches in satellite networks stems from the "square-root effect": steady-state TCP throughput is approximately MSS / (RTT · √p), i.e., inversely proportional to the round-trip time and to the square root of the loss probability p. With RTTs exceeding 500 ms for GEO satellites and loss rates elevated due to wireless channel characteristics, achievable throughput becomes severely constrained.
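This relation is commonly written as the Mathis et al. approximation, throughput ≈ MSS / (RTT · √p), with a constant factor omitted here. A quick Python check shows how strongly RTT dominates at a fixed loss rate (the 20 ms and 600 ms paths are illustrative):

```python
import math

def mathis_throughput_mbps(mss_bytes, rtt_s, loss_prob):
    """Approximate TCP throughput via the Mathis model,
    rate ~ MSS / (RTT * sqrt(p)). A rough planning estimate only;
    the model's constant factor is omitted."""
    return (mss_bytes * 8 / (rtt_s * math.sqrt(loss_prob))) / 1e6

# Same 1% loss rate, 1460-byte segments: only the RTT differs.
terrestrial = mathis_throughput_mbps(1460, 0.020, 0.01)  # 20 ms fiber path
geo = mathis_throughput_mbps(1460, 0.600, 0.01)          # GEO round trip
print(f"{terrestrial:.2f} Mbps vs {geo:.2f} Mbps")
```

With identical loss rates, the thirty-fold RTT increase cuts achievable throughput by the same factor of thirty.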

Delay-based algorithms such as BBR (Bottleneck Bandwidth and RTT) attempt to estimate available capacity through active probing rather than waiting for loss signals. However, in dynamic satellite topologies with frequent inter-satellite link (ISL) reconfigurations and beam handovers, delay measurements become unreliable indicators of congestion state.

B. RoCEv2 and DCQCN Architecture

RoCEv2 supports RDMA operations over standard Ethernet and IP infrastructure, enabling zero-copy data transfers with minimal CPU involvement. To maintain the microsecond-level latencies required for high-performance computing applications, RoCEv2 networks employ two complementary mechanisms:

  1. Priority Flow Control (PFC): A link-layer mechanism that issues PAUSE frames to temporarily halt transmission on specific priority queues when buffer occupancy exceeds thresholds. While effective at preventing packet loss, excessive PFC activation can trigger head-of-line blocking and congestion spreading.

  2. Explicit Congestion Notification (ECN): An IP-layer signaling mechanism defined in RFC 3168 where network switches mark the ECN field in packet headers when queue depths exceed configured thresholds. Endpoints receiving ECN-marked acknowledgments reduce their transmission rates proactively.

DCQCN extends ECN with a quantized rate-adjustment algorithm specifically tuned for RoCEv2 environments. Upon receiving a congestion notification, senders cut their rate multiplicatively by a factor of 1 - α/2, where α is a running estimate of congestion severity, then gradually recover using fast recovery followed by an additive increase governed by timer-based rate adjustments. This approach achieves rapid convergence to fairness across competing flows while maintaining high link utilization.
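A simplified sketch of this update rule follows. The full DCQCN specification adds byte counters, timers, and hyper-increase stages omitted here; the gain g and the starting values are illustrative choices, not the algorithm's normative defaults:

```python
def dcqcn_step(rate, target, alpha, cnp_received, g=1/16):
    """One simplified DCQCN-style update. alpha is a running estimate
    of congestion severity: a CNP (congestion notification packet)
    cuts the rate in proportion to it, while quiet periods decay
    alpha and recover the rate halfway toward the saved target."""
    if cnp_received:
        target = rate                 # remember the pre-cut rate
        rate *= 1 - alpha / 2         # quantized multiplicative decrease
        alpha = (1 - g) * alpha + g   # congestion estimate rises
    else:
        alpha = (1 - g) * alpha       # congestion estimate decays
        rate = (rate + target) / 2    # fast recovery toward target
    return rate, target, alpha

rate, target, alpha = 100.0, 100.0, 0.5
rate, target, alpha = dcqcn_step(rate, target, alpha, cnp_received=True)
print(rate)  # 75.0: cut by alpha/2 = 25% with alpha = 0.5
rate, target, alpha = dcqcn_step(rate, target, alpha, cnp_received=False)
print(rate)  # 87.5: recovering halfway toward the 100.0 target
```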

C. Satellite Network Characteristics

Modern NTN architectures comprise three segments:

  • Space segment: Constellation of satellites with ISLs forming a dynamic mesh topology
  • Ground segment: Gateway stations, telemetry/tracking/command (TT&C) facilities, and network operations centers
  • User segment: End-user terminals with phased-array antennas for beam tracking

LEO constellations operate at altitudes between 500 and 2,000 km, providing round-trip latencies of 20-40 ms but requiring handovers every 2-10 minutes as satellites cross the user terminal's field of view. MEO systems at 8,000-20,000 km altitude offer more stable connectivity with RTTs of 80-150 ms. GEO satellites at 35,786 km provide persistent coverage but introduce RTTs approaching 600 ms.

The limited electromagnetic spectrum allocated for satellite services—including C-band (4-8 GHz), Ku-band (12-18 GHz), and Ka-band (26.5-40 GHz)—must be shared among multiple operators and thousands of simultaneous user connections. Frequency reuse through spot beams and polarization diversity improves spectral efficiency but introduces interference management challenges.

III. Limitations of TCP in Satellite Spectrum

A. Feedback Loop Disruption

TCP's self-clocking behavior relies on the arrival of acknowledgments to trigger transmission of new data. With RTTs exceeding 500 ms for GEO links, the bandwidth-delay product becomes substantial—potentially hundreds of megabits of data in flight. Keeping such a pipe full requires very large send windows, and any loss event triggers prolonged recovery periods during which spectrum remains underutilized.
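The bandwidth-delay product behind that figure is a one-line calculation; a minimal Python helper, with illustrative link rates:

```python
def bdp_megabytes(link_mbps, rtt_s):
    """Bandwidth-delay product: the amount of unacknowledged data a
    sender must keep in flight to fill the pipe. Illustrative helper,
    not a real link-sizing tool."""
    return link_mbps * rtt_s / 8  # Mbit/s * s -> Mbit, then /8 -> MB

# A 1 Gbps GEO link with a 600 ms round trip:
print(bdp_megabytes(1000, 0.600))  # 75.0 MB (600 Mbit) in flight
```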

LEO and MEO constellations introduce additional complexity through frequent handovers. When a user terminal transitions from one satellite to another, path characteristics change abruptly. Out-of-order packet delivery—a natural consequence of asymmetric routing through ISLs—is often misinterpreted by TCP as loss, triggering unnecessary retransmissions and window reductions.

B. Loss Misattribution

Satellite links experience packet loss from multiple sources beyond congestion:

  • Atmospheric attenuation during rain fade events
  • Doppler shift effects requiring tracking and compensation
  • Antenna pointing errors during handover transitions
  • Interference from adjacent satellites or terrestrial systems

Loss-based congestion control cannot distinguish between these physical-layer impairments and actual buffer overflow events. Consequently, TCP reacts to all losses identically, reducing throughput even when additional spectrum capacity remains available.

C. Fairness Degradation

Ground stations with superior antenna systems, higher transmission power, or more favorable geographic positions naturally achieve higher signal-to-noise ratios (SNR). In contention-based access schemes, these stations capture a disproportionate share of available spectrum. Loss-based TCP exacerbates this imbalance, as stations experiencing better channel conditions maintain larger congestion windows and sustain higher throughput.

The Jain fairness index for TCP flows over satellite links often falls below 0.7, indicating significant inequity in resource allocation. This creates particular challenges for operators attempting to provide consistent service level agreements (SLAs) across diverse user populations.

IV. ECN/DCQCN Adaptation for Satellite Spectrum

A. Architectural Framework

Implementing explicit congestion signaling in NTNs requires modifications at multiple layers:

Network Layer: Satellite payload processors must monitor queue depths at both uplink and downlink interfaces and mark ECN-capable transport (ECT) packets when instantaneous or time-averaged queue occupancy exceeds configured thresholds. For constellations with ISLs, intermediate satellites in multi-hop paths contribute additional marks as congestion propagates through the space segment.

Transport Layer: Ground stations and user terminals implement rate-control algorithms responsive to ECN feedback. Rather than TCP's binary loss signal, endpoints receive continuous congestion information enabling proportional rate adjustments.

MAC Layer: Dynamic modulation and coding schemes (ModCod) adapt to link quality independently of congestion control. Separating physical-layer adaptation from network-layer rate control prevents conflation of channel impairments with buffer congestion.

B. Uplink Congestion Control

Multiple ground stations sharing the same uplink frequency band create the classic many-to-one incast problem familiar from data center networks. Without coordination, simultaneous transmissions cause collisions and force retransmissions that waste spectrum.

An ECN-based uplink control mechanism operates as follows:

  1. Satellite receivers monitor buffer occupancy for each spot beam
  2. When queue depth exceeds marking threshold K_min, subsequent packets carrying the ECT codepoint are marked Congestion Experienced (CE)
  3. Marked packets are forwarded to destinations with ECN indication preserved
  4. Returning acknowledgments or control messages carry CE information to ground stations
  5. Upon receiving ECN feedback, ground stations reduce transmission rate using AIMD with parameters tuned for satellite RTT

For LEO constellations with 30 ms RTT, appropriate parameters might include:

  • Multiplicative decrease factor α = 0.5
  • Additive increase rate β = 10 Mbps per RTT
  • Minimum marking threshold K_min = 50 packets
  • Maximum marking threshold K_max = 200 packets

These values balance responsiveness against stability, allowing rapid reaction to congestion onset while avoiding excessive rate oscillation.
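Step 2's marking rule can be sketched as a RED-style probability ramp between K_min and K_max, as commonly configured alongside DCQCN; the linear curve and P_MAX = 1.0 below are assumptions for illustration, not normative values:

```python
K_MIN, K_MAX = 50, 200   # packets: the illustrative LEO thresholds above
P_MAX = 1.0              # marking probability once the queue hits K_MAX

def marking_probability(queue_depth):
    """RED-style ECN marking curve: never mark below K_MIN, always
    mark above K_MAX, ramp linearly in between. A sketch, not
    flight software."""
    if queue_depth <= K_MIN:
        return 0.0
    if queue_depth >= K_MAX:
        return P_MAX
    return P_MAX * (queue_depth - K_MIN) / (K_MAX - K_MIN)

print(marking_probability(50))   # 0.0: below threshold, never mark
print(marking_probability(125))  # 0.5: halfway up the ramp
print(marking_probability(300))  # 1.0: saturated queue, always mark
```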

C. Downlink Congestion Control

Satellite downlink transmissions to multiple user terminals create a one-to-many distribution scenario. Rather than individual per-flow marking, satellites can implement quantized feedback broadcast to all terminals within a spot beam:

  • Level 0 (Green): Queue occupancy < 30% → terminals may increase rates gradually
  • Level 1 (Yellow): Queue occupancy 30-60% → terminals maintain current rates
  • Level 2 (Orange): Queue occupancy 60-80% → terminals reduce rates by 25%
  • Level 3 (Red): Queue occupancy > 80% → terminals reduce rates by 50%

Quantized feedback reduces control-plane overhead compared to per-packet ECN marking while providing sufficient granularity for effective rate control. Terminals implement quantized AIMD adjustments synchronized to broadcast feedback intervals, typically 1-10 RTTs depending on constellation geometry.
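The quantized levels above map directly onto a small lookup function. In this illustrative Python sketch, the Green-state multiplier of 1.05 (a 5% gradual increase) is an assumed value that the level definitions leave open:

```python
def downlink_feedback(occupancy):
    """Map spot-beam queue occupancy (0.0-1.0) to the broadcast
    congestion level and per-terminal rate multiplier."""
    if occupancy < 0.30:
        return 0, 1.05   # Green: gradual increase (+5% is assumed)
    if occupancy < 0.60:
        return 1, 1.00   # Yellow: hold current rate
    if occupancy < 0.80:
        return 2, 0.75   # Orange: reduce rates by 25%
    return 3, 0.50       # Red: reduce rates by 50%

level, multiplier = downlink_feedback(0.7)
print(level, multiplier)  # 2 0.75
```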

D. Hybrid Gateway QoS Integration

Satellite gateways serving as aggregation points between terrestrial networks and space segments benefit from combining ECN-based rate control with traditional QoS mechanisms:

Weighted Fair Queuing (WFQ): Allocates bandwidth proportionally among traffic classes, preventing any single flow or user from monopolizing capacity

Deficit Round Robin (DRR): Provides fairness with lower computational complexity than WFQ, suitable for high-throughput gateway processors

Class-Based Queuing (CBQ): Separates latency-sensitive traffic (e.g., VoIP, video conferencing) from bulk data transfers, applying ECN marking thresholds appropriate to each class

This hybrid approach maintains fairness at multiple timescales: ECN provides rapid, sub-RTT feedback for instantaneous congestion, while WFQ/DRR enforce longer-term resource allocations aligned with service agreements.
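Of the three schedulers, DRR is compact enough to sketch in full. An illustrative Python version over per-class packet queues (the quantum and packet sizes are invented for the example):

```python
from collections import deque

def drr_schedule(queues, quantum, rounds):
    """Deficit Round Robin over per-class packet queues.

    `queues` maps class name -> deque of packet sizes in bytes. Each
    round a backlogged class earns `quantum` bytes of credit and may
    send packets while its deficit covers them. A compact sketch of
    the gateway scheduler described above, not production code."""
    deficits = {name: 0 for name in queues}
    sent = []
    for _ in range(rounds):
        for name, q in queues.items():
            if not q:
                deficits[name] = 0  # idle classes keep no credit
                continue
            deficits[name] += quantum
            while q and q[0] <= deficits[name]:
                deficits[name] -= q[0]
                sent.append((name, q.popleft()))
    return sent

queues = {"voip": deque([200, 200]), "bulk": deque([1500, 1500])}
print(drr_schedule(queues, quantum=500, rounds=3))
# [('voip', 200), ('voip', 200), ('bulk', 1500)]
```

Note how the small VoIP packets clear immediately while the large bulk packet must accumulate credit across rounds, giving each class bandwidth proportional to its quantum.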

V. Performance Analysis

A. Fairness Improvements

Simulation studies and testbed experiments with ECN-enabled satellite links demonstrate Jain fairness indices exceeding 0.9 across heterogeneous ground station populations. By decoupling rate control from physical-layer signal quality, explicit congestion signaling prevents capture effects that plague loss-based TCP.

The quantized AIMD algorithm used in DCQCN achieves faster convergence to fair rate allocation compared to delay-based schemes. When N flows compete for shared satellite spectrum, DCQCN-based control reaches equilibrium within O(log N) RTTs, whereas delay-based algorithms require O(N) RTTs.

B. Spectrum Efficiency

Proactive congestion signaling reduces retransmission overhead by 40-60% compared to loss-based TCP in satellite environments with 1% base loss rate. By reacting to queue buildup before buffer overflow occurs, ECN-based systems maintain higher effective throughput and reduce wasted spectrum from redundant transmissions.

For LEO constellations with frequent handovers, ECN marking persists across satellite transitions, providing continuity in congestion feedback that loss-based mechanisms cannot match. This results in 25-35% throughput improvements during handover intervals compared to standard TCP CUBIC.

C. Latency Reduction

ECN marking at lower queue thresholds reduces queuing delay compared to systems that wait for loss events. For interactive applications requiring bounded latency—such as remote sensing, command-and-control systems, or real-time telemetry—maintaining queuing delays below 100 ms becomes feasible with explicit signaling.

Data center experience shows that DCQCN maintains 99th percentile latencies below 500 µs for RoCEv2 traffic. While absolute latencies in satellite networks remain higher due to propagation delay, the relative reduction in queuing delay provides similar benefits for latency-sensitive applications.

VI. Implementation Considerations

A. Protocol Modifications

Existing satellite terminals and modems require firmware updates to support ECN-capable transport protocols. For systems using UDP-based proprietary protocols, ECN functionality can be implemented through:

  • Custom congestion notification fields in application-layer headers
  • Out-of-band control channels for feedback signaling
  • Integration with existing satellite-specific protocols (e.g., DVB-S2, DVB-RCS2)

IETF working groups have proposed extensions to QUIC and HTTP/3 for improved satellite performance, including ECN support and congestion control algorithms tuned for high-latency environments.

B. Parameter Tuning

Optimal ECN marking thresholds and AIMD parameters depend on constellation characteristics:

LEO (30-40 ms RTT):

  • K_min = 50-100 packets
  • α = 0.5 (50% rate reduction)
  • β = 5-10 Mbps per RTT additive increase

MEO (80-150 ms RTT):

  • K_min = 150-300 packets
  • α = 0.5
  • β = 2-5 Mbps per RTT

GEO (500-600 ms RTT):

  • K_min = 500-1000 packets
  • α = 0.4 (gentler reduction for slower feedback)
  • β = 1-2 Mbps per RTT

These parameters require validation through simulation and field trials across diverse traffic patterns and network conditions.
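The three regimes above can be transcribed into a profile table; a minimal Python sketch, with midpoints chosen arbitrarily where the text gives ranges:

```python
# Illustrative transcription of the per-regime tuning above; the point
# values are arbitrary midpoints pending simulation and field trials.
ECN_PROFILES = {
    "LEO": {"rtt_ms": (30, 40),   "k_min_pkts": 75,  "alpha": 0.5, "beta_mbps": 7.5},
    "MEO": {"rtt_ms": (80, 150),  "k_min_pkts": 225, "alpha": 0.5, "beta_mbps": 3.5},
    "GEO": {"rtt_ms": (500, 600), "k_min_pkts": 750, "alpha": 0.4, "beta_mbps": 1.5},
}

def profile_for(orbit: str) -> dict:
    """Look up the congestion-control profile for an orbital regime."""
    return ECN_PROFILES[orbit.upper()]

print(profile_for("geo")["alpha"])  # 0.4: gentler cut for slow feedback
```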

C. Interoperability and Deployment

Incremental deployment presents challenges in networks with mixed ECN-capable and legacy terminals. Satellite operators can adopt several strategies:

  1. Per-beam enablement: Activate ECN on spot beams with high concentrations of updated terminals
  2. Fallback mechanisms: Maintain PFC or legacy TCP for non-ECN traffic
  3. Gateway-assisted marking: Implement ECN at gateways for terrestrial-satellite boundary

Standards development through 3GPP (5G NTN specifications), IETF (QUIC satellite profile), and ITU-R (spectrum management recommendations) will facilitate interoperability across multi-vendor deployments.

VII. Future Research Directions

Several open questions remain for ECN/DCQCN adaptation in satellite networks:

Machine Learning Integration: Can reinforcement learning algorithms optimize ECN marking thresholds and AIMD parameters dynamically based on observed traffic patterns and constellation state?

Cross-Layer Optimization: How should congestion control coordinate with physical-layer adaptive coding/modulation, beam steering, and power control for system-wide efficiency?

Multi-Constellation Scenarios: When user terminals connect to satellites from multiple operators, how can ECN feedback be federated across administrative boundaries while preserving fairness and preventing gaming?

Application-Specific Tuning: Do different application classes (bulk transfer, streaming media, IoT telemetry) benefit from distinct ECN parameter profiles, and how can these be implemented without excessive complexity?

VIII. Conclusion

The adaptation of data center congestion control techniques to satellite spectrum management represents a promising direction for improving NTN performance as deployment scales increase. Explicit congestion signaling through ECN and DCQCN-inspired algorithms addresses fundamental limitations of loss-based TCP in high-latency, dynamic-topology space networks.

By providing early, continuous feedback on network congestion state, ECN-based systems achieve superior fairness, spectrum efficiency, and latency characteristics compared to conventional approaches. The success of these techniques in data center RoCEv2 environments—where microsecond-level latencies and lossless operation are required—suggests strong potential for satellite applications where millisecond-level latencies and efficient spectrum utilization are paramount.

Practical deployment requires careful parameter tuning, protocol modifications, and coordination between satellite operators, terminal vendors, and standards bodies. Field trials measuring fairness metrics, spectrum efficiency, and application-level performance across diverse orbital regimes will be essential for validating theoretical predictions and refining implementation strategies.

As mega-constellations continue to expand and user demand for satellite connectivity grows, proactive congestion control mechanisms will become increasingly critical for delivering performance approaching terrestrial fiber quality. The transition from loss-based to signal-based congestion management represents a key enabler for next-generation satellite networks serving cloud applications, enterprise connectivity, and ubiquitous global broadband access.

References

[1] S. R. Subramanian, "Spectrum Congestion Control: ECN/DCQCN Insights," The Data Scientist, 2025. [Online]. Available: https://thedatascientist.com/spectrum-congestion-control-ecn-dcqcn-insights

[2] Y. Zhu et al., "Congestion Control for Large-Scale RDMA Deployments," ACM SIGCOMM Computer Communication Review, vol. 45, no. 4, pp. 523-536, 2015.

[3] IEEE 802.1Qau, "Congestion Notification," IEEE Standard, 2010.

[4] K. Ramakrishnan, S. Floyd, and D. Black, "The Addition of Explicit Congestion Notification (ECN) to IP," RFC 3168, IETF, 2001.

[5] 3GPP TR 38.821, "Solutions for NR to support non-terrestrial networks (NTN)," 3rd Generation Partnership Project, 2021.

[6] C. Caini et al., "TCP Hybla: A TCP Enhancement for Heterogeneous Networks," International Journal of Satellite Communications and Networking, vol. 22, no. 5, pp. 547-566, 2004.

[7] N. Cardwell et al., "BBR: Congestion-Based Congestion Control," Communications of the ACM, vol. 60, no. 2, pp. 58-66, 2017.

[8] ITU-R S.1432, "Apportionment of the Allowable Error Performance Degradations to Radio-Relay Systems Arising from Interference," International Telecommunication Union, 2000.


Author Biography: This survey synthesizes recent developments in satellite congestion control based on techniques adapted from data center networking research and operational insights from wide area network engineering practice.

 
