Monday, June 26, 2023

Scoop: Congress sets limits on staff ChatGPT use

The House is placing new guardrails around use of the popular AI chatbot ChatGPT by congressional offices, Axios has learned.

Why it matters: It's the latest example of how Washington is grappling with the implications of the recent explosive growth in generative AI both legislatively and personally.

Driving the news: In a memo to House staffers on Monday morning, a copy of which was obtained by Axios, the chamber's Chief Administrative Officer Catherine L. Szpindor wrote that offices are "only authorized" to use the paid ChatGPT Plus.

  • Unlike the free service, she said, the $20-per-month subscription version "incorporates important privacy features that are necessary to protect House data."
  • She said that, in addition to other versions of ChatGPT, no other large language models are authorized for use.

I asked Bard to outline federal legislation for regulating the use of large language models such as ChatGPT and Bard in the drafting of legislation:

Sure, here is an outline of federal legislation for regulation of the use of large language models such as ChatGPT and Bard in drafting of legislation:

  • Title: The Large Language Models in Legislation Act
  • Section 1. Findings. The Congress finds the following:
    • Large language models (LLMs) are powerful tools that can be used to generate text, translate languages, write different kinds of creative content, and answer questions in an informative way.
    • LLMs have the potential to be used to draft legislation, but there are risks associated with this use, such as the potential for LLMs to generate biased or inaccurate content, or to be used to circumvent existing laws and regulations.
    • It is in the public interest to regulate the use of LLMs in drafting of legislation in order to mitigate these risks and ensure that LLMs are used in a responsible and ethical manner.
  • Section 2. Definitions. In this Act:
    • The term "large language model" means a statistical method that predicts the next word in a sequence of words, based on the words that have already been generated.
    • The term "drafting of legislation" means the process of writing, editing, and revising proposed legislation.
  • Section 3. Prohibitions. It shall be unlawful for any person to use an LLM to draft legislation in a manner that:
    • Is biased or inaccurate;
    • Circumvents existing laws or regulations;
    • Is intended to mislead or deceive the public; or
    • Is otherwise harmful to the public interest.
  • Section 4. Enforcement. The Federal Trade Commission shall have the authority to enforce this Act.
  • Section 5. Severability. If any provision of this Act is held to be invalid, the remainder of this Act shall remain in full force and effect.

This is just an outline, and the specific provisions of the legislation would need to be further developed. However, this outline provides a starting point for considering how to regulate the use of LLMs in drafting of legislation.

Here are some additional considerations that could be included in the legislation:

  • The level of human oversight required for the use of LLMs in drafting legislation.
  • The types of information that LLMs are allowed to access when drafting legislation.
  • The ways in which the output of LLMs can be verified for accuracy and bias.
  • The penalties for violations of the legislation.

The goal of the legislation should be to strike a balance between the potential benefits of using LLMs in drafting legislation and the risks associated with this use. The legislation should also be flexible enough to adapt to the rapidly evolving nature of LLM technology.

Why pick the FTC to enforce this? It seems that a Congressional office similar to the GAO might be better. Also, why limit it to just the Legislature? How about limits on Executive departments and the Judiciary?

The FTC was chosen to enforce the legislation because it has a long history of regulating the use of technology in the public interest. The FTC has also been involved in regulating the use of artificial intelligence, and it has the expertise and resources to enforce the legislation.

A Congressional office similar to the GAO might also be a good choice to enforce the legislation. The GAO has a strong track record of auditing and evaluating government programs, and it could provide valuable insights into the use of LLMs in drafting legislation.

The legislation could be expanded to include the executive branch and the judiciary. However, there are some challenges to enforcing such legislation in these branches. For example, the executive branch has a lot of discretion in how it implements laws, and it could be difficult to enforce limits on the use of LLMs in the executive branch. The judiciary is also independent, and it could be difficult to enforce limits on the use of LLMs in the judiciary.

Ultimately, the decision of who should enforce the legislation and which branches of government should be included is a policy decision that would need to be made by Congress.

Here are some additional considerations that could be taken into account when making this decision:

  • The expertise and resources of the different agencies.
  • The independence of the different agencies.
  • The need to ensure that the legislation is enforceable.
  • The need to balance the potential benefits and risks of using LLMs in different branches of government.

Citizens might feel that offices with the most power over them need the most regulation. They do not wish to be governed by AI.


You are right. Citizens might feel that offices with the most power over them need the most regulation. They do not wish to be governed by AI. This is a legitimate concern, and it is important to consider the potential risks of using AI in these branches of government.

Here are some of the risks that could arise from using AI in the executive branch and the judiciary:

  • Bias: AI systems are trained on large amounts of data, and this data can be biased. If an AI system is trained on data that is biased, it could generate biased output. This could lead to decisions that are unfair or discriminatory.
  • Inaccuracy: AI systems are not perfect, and they can make mistakes. If an AI system is used to make decisions that have a significant impact on people's lives, these mistakes could have serious consequences.
  • Opacity: AI systems are often opaque, meaning that it is difficult to understand how they make decisions. This can make it difficult to hold AI systems accountable for their decisions.

These are just some of the risks that could arise from using AI in the executive branch and the judiciary. It is important to carefully consider these risks before deciding whether or not to use AI in these branches of government.

It is also important to remember that AI is a tool, and like any tool, it can be used for good or evil. It is up to us to ensure that AI is used in a responsible and ethical manner. We need to put in place safeguards to prevent AI from being used to harm people or to violate their rights.

If we do this, AI can be a powerful force for good in the world. It can help us to make better decisions, to solve complex problems, and to improve our lives.

 

Wednesday, June 21, 2023

Detect and Avoid System | General Atomics Aeronautical Systems Inc.

ga-asi.com

Detect and Avoid System


Essential Technology to Integrate Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS)

As a leading manufacturer of Remotely Piloted Aircraft (RPA) systems, GA-ASI has been an industry leader in the efforts to advance technology and develop policy to facilitate the safe integration of RPA into the National Airspace. One focus of these efforts is the inability of Unmanned Aircraft Systems (UAS) to meet the "see and avoid" requirements found in 14 CFR 91.113. These regulations describe a pilot-provided function that, for UAS, must be met using an alternative means of compliance, known as Detect and Avoid (DAA).

DAA System Installed on MQ-9B SkyGuardian

GA-ASI has been at the forefront of DAA technology development and standardization since 2011, when early prototypes of the GA-ASI Air-to-Air Radar (ATAR) were first produced and tested. With the formation of RTCA Special Committee 228 (SC-228) in 2013, GA-ASI began making critical contributions to the standardization of the technology through flight testing of the GA-ASI ATAR and a prototype DAA algorithm, developed by Information Systems Delft (ISD) and contracted to GA-ASI. These prototypes, in conjunction with the results from four NASA/GA-ASI flight test campaigns, served as building blocks for the Minimum Operational Performance Standards (MOPS) for DAA and ATAR that were released in Q2 of 2017: RTCA DO-365 and RTCA DO-366, respectively. Collaboration with Honeywell, ACSS, BAE Systems and multiple flights carried out on NASA's Ikhana were instrumental in achieving an industry-wide solution.

Later in 2017, these MOPS were adopted by the FAA into two Technical Standard Orders (TSOs) as the certification basis for DAA: TSO-C211 for DAA, and TSO-C212 for ATAR systems. Additionally, GA-ASI has actively participated in the standards development for the next generation of DAA Systems: Airborne Collision Avoidance System-Xu (ACAS-Xu), scheduled to be released in late 2020.

Alongside contributions to the standards development for DAA, GA-ASI has sparked several internal programs intent on furthering the integration of UAS in civil airspace. The first is a mature, fielded version of the aforementioned prototype system, known as the Baseline DAA system. With over 3,000 hours deployed onboard Customs and Border Protection's (CBP) MQ-9 aircraft since 2016, the design, support, and sustainment of the Baseline DAA system has established GA-ASI as a leader in the realm of DAA technology.

The Baseline DAA program has also provided insight and comprehensive experience to lay the groundwork for its subsequent project to create the world's first-ever certified DAA system. Efforts to design, build, and certify this DAA system began in 2018 and are expected to reach completion with a TSO-C211 and TSO-C212 authorization in 2022.

The certified system incorporates a TCAS II collision avoidance system, certified hardware and software, and represents the culmination of nearly 10 years of industry-leading DAA technology. The certified system is the key to complying with the "see and avoid" requirements in the U.S., and GA-ASI has a project planned to do the same for European airspace.

Upon release of the ACAS Xu MOPS, GA-ASI plans to develop an ACAS Xu DAA system, a DAA solution jointly developed by U.S. and European standards organizations (RTCA and EUROCAE, respectively). ACAS Xu signifies a joint solution to the largest technological barrier to civil airspace access for UAS, and is the key to international harmonization of DAA standards. This solution, along with the other GA-ASI DAA solutions, provides our customers and the industry with an ever-growing opportunity for safe airspace access for UAS.

Due Regard Radar

Due Regard Radar Assembly

The Due Regard Radar (DRR) is an air-to-air radar being developed under company funding to meet the requirements envisioned to enable Remotely Piloted Aircraft (RPA) to fly in international airspace. The DRR comprises a two-panel Active Electronically Scanned Array (AESA) antenna and a Radar Electronics Assembly (REA) that give the RPA pilot the ability to detect and track aircraft across the same Field-of-View (FOV) as a manned aircraft. AESA technology allows DRR to track multiple targets while simultaneously continuing to scan for new aircraft. Flight tests of a pre-production DRR are underway on both a manned aircraft and a Predator® B RPA.

DRR is a key component of GA-ASI's overall airborne Detect and Avoid (DAA) architecture for Predator B. The DAA system must detect and avoid cooperative (i.e. aircraft equipped with a transponder) and non-cooperative (i.e. aircraft not equipped with a transponder) aircraft. Detection and tracking of cooperative aircraft is performed by the Traffic Collision Avoidance System (TCAS) and Automatic Dependent Surveillance-Broadcast (ADS-B). By tracking non-cooperative aircraft, DRR enables a collision avoidance capability onboard the RPA and allows the pilot to separate the aircraft from other air traffic in cooperation with Air Traffic Control (ATC).
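To make that division of labor concrete, here is a minimal, hypothetical Python sketch (not GA-ASI's implementation) of the fusion step a DAA system performs: cooperative tracks from TCAS/ADS-B and non-cooperative radar tracks are merged into one air picture and the nearest intruders are flagged. All class names, thresholds, and units are illustrative assumptions.

from dataclasses import dataclass
from math import hypot

@dataclass
class Track:
    track_id: str
    source: str        # "ADS-B", "TCAS", or "DRR" (radar)
    x_nm: float        # relative position, nautical miles east of ownship
    y_nm: float        # relative position, nautical miles north of ownship
    cooperative: bool  # True if the intruder carries a transponder

def fuse_tracks(cooperative_tracks, radar_tracks):
    """Combine cooperative (TCAS/ADS-B) and non-cooperative (radar) tracks into
    one picture, preferring cooperative data when both sources report an
    intruder at roughly the same position (hypothetical 0.5 nm gate)."""
    fused = list(cooperative_tracks)
    for rt in radar_tracks:
        duplicate = any(hypot(rt.x_nm - ct.x_nm, rt.y_nm - ct.y_nm) < 0.5
                        for ct in cooperative_tracks)
        if not duplicate:
            fused.append(rt)
    return fused

def alert_threats(tracks, alert_range_nm=4.0):
    """Return intruders inside an illustrative alerting range, nearest first."""
    threats = [t for t in tracks if hypot(t.x_nm, t.y_nm) < alert_range_nm]
    return sorted(threats, key=lambda t: hypot(t.x_nm, t.y_nm))

if __name__ == "__main__":
    coop = [Track("A1", "ADS-B", 3.0, 1.0, True)]
    radar = [Track("R7", "DRR", -2.0, 0.5, False),   # non-cooperative intruder
             Track("R8", "DRR", 3.1, 1.1, False)]    # same aircraft as A1
    for t in alert_threats(fuse_tracks(coop, radar)):
        print(t.track_id, t.source, round(hypot(t.x_nm, t.y_nm), 2), "nm")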

GA-ASI is currently partnered with the FAA, NASA, and several industry partners to mature a DAA capability onboard NASA's Ikhana RPA. The FAA is developing a collision avoidance algorithm specifically for RPA, called ACAS Xu, which uses the GA-ASI DRR to detect and track non-cooperative aircraft.

Features/Benefits:

  • High-performance, actively electronically scanned array antenna
  • Air-cooled antenna and radar electronics assembly
  • Detects and tracks any aircraft in FOV regardless of equipment onboard
  • Feeds tracks through DAA system to remote pilot
  • Enables due regard operations in international airspace
  • Facilitates full and safe integration of RPA into the U.S. National Airspace System (NAS)

 

Tuesday, June 20, 2023

Satellite-based cloud computing deploying Datacenter on Satellites for Edge and Fog Computing – International Defense Security & Technology

idstch.com

Rajesh Uppal

Constellations of satellites are being proposed in large numbers; most of them are expected to be in orbit within the next decade. They will provide communication to unserved and underserved communities, enable global monitoring of Earth and enhance space observation.

Constellations have their greatest potential in the communication field. The upcoming era of the Internet-Of-Things requires the communication infrastructure to handle huge amounts of data and to guarantee service in any geographical position. Constellations, however, also have great potential in weather science, safety/security and disaster monitoring.

As the number of satellites continues to increase, they are becoming an important part of IoT and 5G/6G communications. How to handle the data generated by the satellite Internet of Things is a problem that deserves serious attention. Several space startups are integrating micro-data centers into their designs, offering computing power to process satellite imaging data or monitor distributed sensors for Internet of Things (IoT) applications.

Edge computing requirements

The number of remote sensing satellites launched has increased dramatically in recent years. After downlinking a satellite's remote sensing imagery, researchers typically rely on artificial intelligence algorithms and the computing power of ground data centers to extract the information hidden in the images, a process that costs considerable time and money. In most existing systems, observation, relay, and communication satellites operate as single spacecraft over single links, with no networking between them. Because of energy constraints, the processors available on board have limited performance and cannot meet the growing demand for in-space computing.

At the same time, communication rates between satellites, and between satellites and the ground, have not kept pace. Onboard sensors generate large volumes of data, causing long delays in transmission, which is a serious disadvantage for applications with strict real-time requirements (such as early warning).

The demand for processing data close to its origin has driven the popularity of the edge computing paradigm in research and industry. The main idea behind edge computing is to embed computing resources at the edge of the network, i.e., close to clients. Edge computing is a distributed computing model in which computation takes place near the location where data is collected and analyzed, rather than on a centralized server or in the cloud. Compared to cloud computing, resources are thus available with lower latency and bandwidth costs. For space-based systems, edge computing can save both time and energy. Energy is an even more precious resource in space than on Earth, so cutting down on transmissions, whether to relay information or run computations, can be hugely important. Keeping data out of the cloud can also reduce privacy and security risks.

For example, sensor-equipped swarms of nanosatellites, such as ChipSats and CubeSats, can use edge systems to process the data they collect in low-Earth orbit without activating the satellite’s power-hungry radio. These satellite swarms, which fly about 250 to 370 miles above the Earth’s surface, can be clustered and organized to support important missions in the study of weather, climate science, national security and disaster response.
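As a rough illustration of that onboard-versus-downlink tradeoff, the toy Python sketch below compares the time and energy of sending raw data to the ground against processing it at the edge and sending only the result. Every number (link rate, energy per megabyte, processing rate) is an invented placeholder, not a figure from the article.

def downlink_cost(data_mb, link_mbps=10.0, energy_j_per_mb=2.0):
    """Time (s) and energy (J) to send raw data to the ground (illustrative numbers)."""
    return data_mb * 8 / link_mbps, data_mb * energy_j_per_mb

def onboard_cost(data_mb, result_mb, proc_mb_per_s=5.0,
                 proc_j_per_mb=0.5, link_mbps=10.0, energy_j_per_mb=2.0):
    """Time and energy to process onboard and downlink only the result."""
    t = data_mb / proc_mb_per_s + result_mb * 8 / link_mbps
    e = data_mb * proc_j_per_mb + result_mb * energy_j_per_mb
    return t, e

if __name__ == "__main__":
    raw, result = 500.0, 5.0   # e.g. raw imagery vs. extracted detections, in MB
    print("downlink raw:    %.0f s, %.0f J" % downlink_cost(raw))
    print("process onboard: %.0f s, %.0f J" % onboard_cost(raw, result))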

Fog and Edge Computing implementations

Satellite communication systems are flexible enough to be adapted for fog and edge computing. Edge computing can be implemented in satellite telecommunication systems by supplementing the user terminal or VSAT terminal modem with an additional computing module or single-board computer. Structurally, a user terminal or VSAT terminal is a board with modem chips installed on it; such a design can be extended with a single-board computer that provides the edge computing capability.

An alternative option is to connect a single-board computer to an Ethernet local area network, together with a Wi-Fi router and other short-range radio access equipment for IoT smart things. This added computing capacity supports the computing needs of the IoT smart things within the coverage of the short-range radio access network, so that only the results of the local IoT processing are transmitted over the satellite communication channel.
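A minimal sketch of that pattern, assuming a single-board computer that aggregates local IoT readings and pushes only compact summaries over the satellite channel, might look like the following Python; the summary fields and the uplink callback are hypothetical.

import statistics

def summarize(readings):
    """Reduce raw sensor samples to a compact summary suitable for a
    narrowband satellite channel (illustrative fields)."""
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": max(readings),
        "min": min(readings),
    }

def edge_loop(batches, uplink):
    """Process each batch locally on the single-board computer and push
    only the summary over the satellite link."""
    for batch in batches:
        uplink(summarize(batch))

if __name__ == "__main__":
    sample_batches = [[21.2, 21.4, 22.0, 21.8], [30.1, 29.7, 30.4]]
    edge_loop(sample_batches, uplink=print)  # print stands in for the real satellite modem API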

One of the proposed architectures is a hierarchical network of satellite IoT supported by terrestrial data centers. Specifically, the cloud-edge stratified satellite IoT edge computing system consists of three parts: the satellite IoT cloud node, the satellite IoT edge node, and the ground data center. The satellite IoT edge nodes have computing and storage capabilities and use a common virtualization platform that can deploy different services as needed. Satellite IoT edge nodes can communicate with each other, and satellite IoT edge nodes and satellite IoT cloud nodes can cooperate with each other.

This can bring two benefits. First, satellite IoT edge nodes can request assistance from satellite IoT cloud nodes or ground data centers to offload their computing tasks to them. Second, satellite IoT edge nodes can also accept tasks from satellite IoT cloud nodes or terrestrial data centers, or establish fast service clusters with other satellite IoT edge nodes.
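A simple way to picture the offloading logic is a placement function that prefers the local edge node, then a satellite cloud node, then the ground data center, subject to available capacity. The Python sketch below is an illustrative assumption, not an implementation from the cited architecture; node names, units, and the tier ordering are made up.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    tier: str          # "edge", "cloud-sat", or "ground"
    free_cpu: float    # available compute, arbitrary units
    rtt_ms: float      # round-trip time from the requesting edge node

def place_task(task_cpu, nodes):
    """Pick a node with enough spare capacity, preferring the local edge tier,
    then satellite cloud nodes, then the ground data center."""
    order = {"edge": 0, "cloud-sat": 1, "ground": 2}
    candidates = [n for n in nodes if n.free_cpu >= task_cpu]
    if not candidates:
        return None
    return min(candidates, key=lambda n: (order[n.tier], n.rtt_ms))

if __name__ == "__main__":
    nodes = [Node("edge-sat-12", "edge", 1.0, 5),
             Node("cloud-sat-3", "cloud-sat", 8.0, 40),
             Node("ground-dc-1", "ground", 100.0, 250)]
    print(place_task(4.0, nodes))   # too big for the edge node, so a satellite cloud node is chosen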

Fog computing can be implemented in the satellite segment of IoT systems by adding computing capacity to the orbital segment. With that capacity in place, IoT information received from devices located in the satellite's service area can be processed in orbit, increasing the efficiency of information processing and reducing delay.

An alternative solution is to develop and launch GEO satellites carrying a cloud data center module as a payload. These satellites would be accessed via GEO satellite repeaters over inter-satellite links. To improve the reliability of data storage and computing operations and to increase cloud computing productivity, the satellite cloud data centers would be connected to ground-based cloud data centers over dedicated high-speed secure radio links.

Satellite IoT cloud nodes have more powerful computing and storage capabilities than satellite IoT edge nodes and are equipped with heterogeneous resources such as CPUs, GPUs, and FPGAs. They can not only handle applications offloaded from satellite IoT edge nodes, but also perform task scheduling, task analysis, data fusion, intelligent distribution, and fast service cluster construction for the entire satellite network.

The ground data center has the capability of a large cloud computing center that can communicate with satellite IoT nodes or the ground Internet. Compared to satellite IoT nodes, the ground data centers have the highest computing power and the most storage resources.

Challenges for LEO edge

The highly dynamic nature of satellite constellations and their limited computational capacity mean that existing edge computing platforms are not yet ready to be applied at the LEO edge.

Mobile Server Infrastructure. The defining property of servers attached to satellites in a LEO constellation is that the satellites orbit the earth at high speed. For example, a satellite at an altitude of 550 km must maintain a speed of roughly 27,000 km/h to stay in orbit, and its servers move with it. For static ground station equipment, this means frequently changing communication partners.

Same-model Servers. Satellites in a constellation are mostly the same model. Because the satellites orbit continuously while the earth rotates beneath the constellation, each satellite eventually covers every part of the earth, so using different kinds of satellites for different regions is not possible. The servers must therefore also be of the same model. Server capabilities can be upgraded over time as satellites reach the end of their lifetime, but developing different versions can have a negative impact on development and production costs.

Homogeneously Distributed Servers. Due to their non-geostationary nature, satellites are also homogeneously distributed across the globe, with satellites evenly spaced along each orbit. This means that each ground station has access to more or less the same number of equally equipped satellites at all times.

Heterogeneous Demand. Nevertheless, demand is of course not homogeneous across the earth. Urban areas have a higher client density, which increases resource demand compared to rural areas or oceans with a smaller client population.

Limited Compute Capabilities. As a consequence of being deployed in space, satellite servers' capabilities must be limited, because energy consumption and heat generation must be kept low for economic reasons. Larger heat dissipation mechanisms, batteries, or solar arrays lead to higher weight and, consequently, higher launch costs.

Another effect of placing servers on satellites in LEO is that those servers cannot be accessed for maintenance. Consequently, if a satellite or server fails, it remains failed and can only be de-orbited. As with cloud computing, developers expect their applications to be highly available in a LEO edge environment as well. Consequently, a LEO edge platform needs to abstract from the widely distributed and heterogeneous underlying infrastructure to provide fault-tolerance.

Fixed Server Capabilities. Not being able to access individual servers directly also means that they cannot be upgraded. Over the lifetime of a satellite, typically about five years, the server capabilities, and with them the total capability of the constellation of servers, remain fixed.

Fixed Number of Servers. Horizontal scalability is also limited, as servers can be placed only on satellites that are part of the constellation, and the size of the constellation cannot be changed easily. Launching and deploying additional satellites requires approval by governmental agencies, and competing space Internet companies may lobby to limit constellation sizes, especially as LEO is a limited resource.
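One consequence of this mobile, homogeneous infrastructure is that ground stations must continually hand over between satellites. The Python sketch below illustrates one naive handover rule: pick the highest satellite above a minimum elevation mask, preferring one that is still rising so the contact lasts longer. The threshold and fields are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class SatPass:
    sat_id: str
    elevation_deg: float   # current elevation seen from the ground station
    rising: bool           # True if the satellite is still climbing in the sky

def choose_satellite(visible, min_elevation_deg=25.0):
    """Hand the ground station over to the best currently visible satellite:
    highest elevation above the usable mask, breaking ties toward a satellite
    that is still rising."""
    usable = [s for s in visible if s.elevation_deg >= min_elevation_deg]
    if not usable:
        return None
    return max(usable, key=lambda s: (s.elevation_deg, s.rising))

if __name__ == "__main__":
    now_visible = [SatPass("LEO-041", 31.0, False),
                   SatPass("LEO-042", 30.5, True),
                   SatPass("LEO-118", 12.0, True)]   # below the elevation mask
    print(choose_satellite(now_visible))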

OrbitsEdge Plans Racks in Space

Florida-based OrbitsEdge is embracing a data center in orbit model, taking off-the-shelf rackmount servers and bolting them into a satellite bus (the structural frame housing payloads). “We’re both edge computing and data center,” said Rick Ward, Chief Technical Officer of OrbitsEdge. “We want to put big-performance computing infrastructure into space to process data, cleanse it, aggregate data from multiple sources and analyze it. We are that missing piece of the infrastructure to commercial space.”

OrbitsEdge is able to communicate with other satellites to collect and process their data, as well as performing overhead edge computing where a traditional data center is unavailable or not close enough. The company sees opportunities in offloading and storing data from Earth Observation satellites, processing it into immediately usable imagery, and sending the results directly to end-users in the field. It has had discussions with the U.S. Department of Defense, NASA, and commercial cloud providers on how such non-traditional resources could be useful for various use cases on Earth, in space, and on the surface of other celestial bodies.

“It’s another location for processing data above the clouds,” said Sylvia France, President of OrbitsEdge. “There’s a lot of interest in fintech, being able to make buy/sell decisions based on counting cars in parking lots. We’re also talking to entertainment companies as well, from space tourists to augmented reality firms.”

The OrbitsEdge SatFrame is the company's proprietary satellite bus, with a standardized 19-inch server rack offering volume for 5U of hardware. The company's first two SatFrame pathfinder satellites will support 18-inch-deep hardware, with production designs expected to grow to support full-sized 36-inch-deep hardware.


LEOcloud establishes partnerships for satellite-based cloud computing

Satellite communications startup LEOcloud announced a partnership in July 2021 with supercomputer firm Ramon.Space to develop satellite-based cloud computing.

LEOcloud intends to offer “low latency, highly secure, high availability” cloud services, linking customers on the ground with “satellite data suppliers, hybrid cloud edge computing services and global connectivity” in Phase 1 of its strategy, according to a LEOcloud PowerPoint presentation. In Phase 2, LEOcloud, “will develop, launch and operate a satellite-based cloud infrastructure providing low latency, secure, high availability, mission-critical cloud services,” according to the presentation.

“Having access to data from space assets quickly and reliably is absolutely critical to the success of space missions,” Jonata Puglia, Leaf Space co-founder and CEO, said in a statement. Working with LEOcloud will enhance Leaf Space’s ground segment as a service business, he added.

AWS successfully runs AWS compute and machine learning services on an orbiting satellite

Amazon Web Services (AWS) announced in Nov 2022 that it successfully ran a suite of AWS compute and machine learning (ML) software on an orbiting satellite, in a first-of-its-kind space experiment. The experiment, conducted over the past 10 months in low Earth orbit (LEO), was designed to test a faster, more efficient method for customers to collect and analyze valuable space data directly on their orbiting satellites using the cloud.

Providing AWS edge capabilities onboard an orbiting satellite for the first time lets customers automatically analyze massive volumes of raw satellite data in orbit and only downlink the most useful images for storage and further analysis, driving down cost and enabling timely decision making.

“Using AWS software to perform real-time data analysis onboard an orbiting satellite, and delivering that analysis directly to decision makers via the cloud, is a definite shift in existing approaches to space data management. It also helps push the boundaries of what we believe is possible for satellite operations,” said Max Peterson, AWS vice president, worldwide public sector. “Providing powerful and secure cloud capability in space gives satellite operators the ability to communicate more efficiently with their spacecraft and deliver updated commands using AWS tools they’re familiar with.”

AWS is committed to eliminating technical challenges associated with operating in space, including high latency and limited-bandwidth networks. AWS collaborated with D-Orbit and Unibap, two of its global space partners, to directly address these challenges as they apply to satellite operations.

D-Orbit is a leader in the space logistics and transportation service industry and a member of the AWS Partner Network (APN). By applying AWS compute and machine learning services to Earth Observation (EO) imagery, D-Orbit was able to rapidly analyze large quantities of space data directly onboard its orbiting ION satellite.

“Our customers want to securely process increasingly large amounts of satellite data with very low latency,” said Sergio Mucciarelli, vice president of commercial sales of D-Orbit. “This is something that is limited by using legacy methods, downlinking all data for processing on the ground. We believe in the drive towards edge computing, and that it can only be done with space-based infrastructure that is fit for purpose, giving customers a high degree of confidence that they can run their workloads and operations reliably in the harsh space operating environment.”

The teams collaborated to build a software prototype that included the tools they jointly identified as essential for the EO mission, including AWS ML models to analyze satellite imagery in real time, and AWS IoT Greengrass to provide cloud management and analytics even during periods of limited connectivity.

The AWS software prototype was integrated onto a space-qualified processing payload built by Unibap, a high-tech company based in Sweden and another AWS Partner. The Unibap processing payload was then integrated onto a D-Orbit ION satellite and launched into space.

“We want to help customers quickly turn raw satellite data into actionable information that can be used to disseminate alerts in seconds, enable onboard federated learning for autonomous information acquisition, and increase the value of data that is downlinked,” said Dr. Fredrik Bruhn, chief evangelist in digital transformation and co-founder of Unibap. “Providing users real-time access to AWS edge services and capabilities on orbit will allow them to gain more timely insights and optimize how they use their satellite and ground resources.”

Throughout the experiment, the team applied various ML models to satellite sensor data to quickly and automatically identify specific objects both in the sky – such as clouds and wildfire smoke – and objects on Earth including buildings and ships.

Raw satellite images and datasets like these are usually quite large, so the team created a way to break the large data files into smaller ones. Using AWS AI and ML services helped reduce image sizes by up to 42 percent, increasing processing speeds and enabling real-time inference on orbit. The team managed the bidirectional movement of space data over multiple ground station contacts, allowing for greater delay tolerance between communications, by running a reliable TCP/IP proxy between the satellite and the AWS Cloud. This made it simpler for ground crews to manage the file transfers automatically, without manually processing the downlinks over multiple contacts.
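The idea of spreading one large transfer across several short contacts can be pictured as a simple scheduling calculation. The Python below is an illustrative stand-in for the delay-tolerant proxy described above, not AWS's implementation; contact durations, file size, and link rate are invented.

def plan_downlink(file_mb, contacts, rate_mbps):
    """Split a large file across several short ground-station contacts,
    resuming from the last delivered byte at each new contact."""
    sent = 0.0
    schedule = []
    for name, seconds in contacts:
        if sent >= file_mb:
            break
        chunk = min(file_mb - sent, seconds * rate_mbps / 8)  # MB deliverable this pass
        schedule.append((name, round(sent, 1), round(sent + chunk, 1)))
        sent += chunk
    return schedule, sent >= file_mb

if __name__ == "__main__":
    contacts = [("pass-1", 300), ("pass-2", 240), ("pass-3", 300)]  # seconds of visibility
    plan, complete = plan_downlink(file_mb=400, contacts=contacts, rate_mbps=5)
    for leg in plan:
        print(leg)
    print("complete:", complete)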

Market

As adoption of cloud computing continues, NSR's latest report, Cloud Computing via Satellite, 2nd Edition (CCvS2), forecasts $21 billion in cumulative cloud services revenues by 2030 across four key market segments. The impending wave of LEO, MEO, and GEO-HTS satcom services is set to significantly boost long-term cloud adoption and enhance market engagement opportunity, with 233 exabytes of traffic projected.

“The transformation brought about by the adoption of cloud computing is only beginning to impact the satellite sector,” states Shivaprakash Muruganandham, NSR Senior Analyst and report author. “Numerous verticals, from cloud-hosted applications to cloud storage and processing by geospatial analytics providers, will see change and development.”

Partnership with large IT and cloud players drives market capture and growth opportunity across multiple segments. Within those core segments, Satellite Communications will continue to lead traffic via satellite, while Earth Observation data downlinks lead in revenues, representing a $10 billion opportunity, as ground station and data relay services enter the downlink market.

“While growth opportunity is forecast for both existing and nascent markets, cost scalability remains a challenge,” added NSR Analyst and Report co-author, Arthur Van Eeckhout. “However, cloud adoption is dramatically decreasing the legacy knowledge requirements for engagement, lowering the barriers to entry for space-derived data services. Today, start-ups born in the cloud have greater opportunities available to them than in the past.”

References and Resources also include:

https://datacenterfrontier.com/data-centers-above-the-clouds-colocation-goes-to-space/

https://spacenews.com/introducing-leocloud/

https://aws.amazon.com/blogs/publicsector/aws-successfully-runs-aws-compute-machine-learning-services-orbiting-satellite-first-space-experiment/

datacentercatalog.com

The Space Cloud: Satellite Strategies for AWS, Google and Microsoft

Source: Data Center Frontier


Cloud providers Amazon, Google, and Microsoft are building relationships with satellite companies at a rapid clip, working on two distinctly different objectives. Commercial satellite constellations generate loads of data which needs to be stored, processed, and analyzed, making them prime customers. Meanwhile, the expansion of edge computing beyond traditional terrestrial network connections is driving direct connections between data centers and satellite broadband ground stations to reduce latency and increase application speeds.

Given the early days and budding relationships between satellite broadband networks and cloud providers, it's premature to talk about winners and losers, especially since more partnership announcements and new innovations are sure to come in the months and years ahead. Each cloud provider has a distinctive satellite strategy influenced as much by corporate style and pre-existing relationships as by strategic vision.

Amazon: Big Data Today, World Connectivity Tomorrow

Seattle-based Amazon has not been shy about its ambitions to be the go-to cloud provider for satellite companies. In 2017, it convinced imaging firm Maxar to move its then-100 petabyte (PB) library into the Amazon cloud, a customer that now adds more than 80 TB of data per day from a fleet of high-resolution satellites. Amazon today lists IoT firms Hiber and Myriota, ship/plane tracking and weather data company Spire, and radar imaging satellite company Capella Space among its aerospace customers.

Two factors bringing satellite customers to Amazon are the AWS Ground Station (GS) satellite management service and the AWS Aerospace and Satellite Solutions dedicated business team. AWS Ground Station enables operators to send commands directly to satellite fleets and download large amounts of data from them straight into other AWS services, such as storage and AI/ML processing. Amazon thus provides a one-stop shop for satellite operations: spacecraft communications and control together with seamless cloud processing of downloaded data through one company, instead of operators having to build separate ground station networks and cloud operations.
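As a hedged sketch of what that workflow can look like in code, the snippet below uses boto3's groundstation client to find and reserve the next available contact for a satellite. The ARNs and ground station name are placeholders, and the exact parameter names and status values should be checked against the current AWS Ground Station documentation.

from datetime import datetime, timedelta, timezone
import boto3

# Placeholders: substitute ARNs actually registered in your AWS account.
MISSION_PROFILE_ARN = "arn:aws:groundstation:us-west-2:123456789012:mission-profile/example"
SATELLITE_ARN = "arn:aws:groundstation:us-west-2:123456789012:satellite/example"

def reserve_next_contact(ground_station_name, hours_ahead=24):
    """Find the next available contact window for the satellite and reserve it,
    so downlinked data can flow straight into other AWS services."""
    gs = boto3.client("groundstation")
    start = datetime.now(timezone.utc)
    end = start + timedelta(hours=hours_ahead)
    contacts = gs.list_contacts(
        startTime=start,
        endTime=end,
        groundStation=ground_station_name,
        missionProfileArn=MISSION_PROFILE_ARN,
        satelliteArn=SATELLITE_ARN,
        statusList=["AVAILABLE"],
    )["contactList"]
    if not contacts:
        return None
    first = min(contacts, key=lambda c: c["startTime"])
    return gs.reserve_contact(
        missionProfileArn=MISSION_PROFILE_ARN,
        satelliteArn=SATELLITE_ARN,
        groundStation=ground_station_name,
        startTime=first["startTime"],
        endTime=first["endTime"],
    )["contactId"]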

Project Kuiper, Amazon’s future satellite broadband network, is the company’s own creation but will take a while to build and put into commercial service. Amazon is investing $10 billion to build and deploy an initial fleet of 3,236 broadband satellites along with ground equipment and low-cost high-speed end-user equipment. First satellite launches may occur in 2023 with initial service availability in 2024, but the company has not announced a formal timetable.

If Amazon has a weakness in its space strategy, it’s the lack of a satellite partner to provide bandwidth in the short-term until it gets Project Kuiper up and running. Microsoft and Google are teaming up with satellite broadband service providers to provide connectivity directly from the cloud to the edge in a seamless fashion, while Amazon doesn’t (yet) have partners it can work with.

Google: Investments Driving Space Relationships

Compared to Amazon and Microsoft, Google doesn’t have a large space project portfolio. Its current satellite relationships are rooted in long-standing investments rather than a larger business strategy.

Once upon a time – 2014, to be precise – Google agreed to buy an imaging satellite startup company known as Skybox Imaging. Renamed Terra Bella, the subsidiary operated a small fleet of high-resolution imaging satellites for Google. Three years later, Google had reorganized into Alphabet and was cleaning up its portfolio of not-Google companies. Terra Bella and its fleet of satellites were sold to Planet with Google agreeing to a multi-year contract to buy Earth imaging data from Planet, a win-win for both parties.

Less loudly trumpeted was Planet buying Google services. In 2020, Planet co-founder and Chief Strategy Officer Robbie Schingler said the company had a “long-term good deal” with Google as its primary cloud supplier. The imaging company is in the process of upgrading its 140-plus Dove satellite fleet to SuperDoves, increasing Planet's daily satellite data generation from 10 TB/day to 40 TB/day. Storage, image processing, and AI/ML routines to examine the growing catalog of imagery for changes over time all occur on Google Cloud, adding up to petabytes of storage and CPU-intensive value-added services.

SpaceX’s announcement in May that it would directly connect its Starlink satellite broadband network to Google Cloud by putting ground stations at Google Cloud data centers seems obvious in retrospect. The financial relationship between SpaceX and Alphabet dates to 2015, when Google invested $900 million into SpaceX. Google also gained a SpaceX board seat that Google President of Global Partnerships and Corporate Development Don Harrison still holds as of 2021.

Roughly a year ago, Google was advertising for a “Partner Manager” to support satellite broadband service providers, helping to productize a solution and make it available to other satellite broadband ISPs. Is SpaceX partner number one? It would appear so.

Enterprise customers and edge computing platforms will be able to reach Google Cloud in a single hop from the SpaceX Starlink network, providing direct access to Google Cloud services with reduced latency and better overall performance and security. Enterprise customers with large geographic footprints, ranging from public sector agencies with rural offices to the U.S. military, will have access to low-latency high-speed broadband to connect them directly to Google Cloud resources.

Google Cloud plans to sell the combination of its services with SpaceX Starlink connectivity for enterprise customers, according to a Google Cloud spokesperson, with details being worked out as the two target a formal launch in the second half of this year, an event that Microsoft may not be happy with, since it earlier announced it would offer SpaceX Starlink satellite connectivity as an option to connect Azure Modular Data Centers in the field.

An illustration of Mynaric laser communications sending data across a constellation of low earth satellites. (Image: Mynaric)

Expect the Google Cloud/SpaceX Starlink partnership to tout single hop access to Google services and low broadband latency as the two key advantages of the combination, but there’s some fine print involved. SpaceX’s current generation of Starlink v1.0 satellites don’t have laser communications cross-links except for a limited number of polar orbiting spacecraft, with nearly all of the 1,500 currently in orbit simply moving data between users and the closest Starlink ground station in range, sending it along via a terrestrial network once on the ground.

To get the full one-hop advantage out of the Starlink broadband network, nearly all of the current 1,500-plus satellites will have to be upgraded with laser communications links so traffic stays within the satellite network before being sent down to a Google Cloud data center with an on-premises Starlink ground station. SpaceX officials have said all polar orbit satellites launched this year and the next generation of Starlink satellites will include laser cross-links to keep data traffic within the network for faster transmission, but the company hasn't discussed how fast the current fleet will be replaced.

Microsoft: Strategic Partnering to the Edge

Microsoft's focus on enterprise and edge computing has made satellite broadband a key component in meeting the needs of its largest and most lucrative customers. It is also encroaching on Amazon's play to be a one-stop shop for satellite operators, adding Azure Orbital as a “Ground Station as a Service” competitor to AWS GS. Azure Orbital offers the same functionality as AWS GS, enabling satellite operators to manage their own spacecraft through a single console instead of having to build out separate ground station facilities for control and data downloading.

Unlike Amazon and Google, Microsoft is publicly teaming with several aerospace-facing partners such as KSAT, ViaSat, and US Electrodynamics to provide a broader network of worldwide ground stations, along with SES and SpaceX to provide satellite broadband connectivity to the edge for enterprise-class resources such as the Microsoft Azure Modular Data Center. SES is colocating ground stations at Microsoft Azure data centers to support the expansion of its O3b mPOWER medium earth orbit (MEO) broadband network, so Microsoft will be able to provide direct one-hop access between its cloud and edge computing resources.

High-speed satellite broadband capability is essential for Microsoft to support government customers, such as the $10 billion JEDI Department of Defense cloud services contract. O3b and its mPOWER expansion can provide symmetrical broadband speeds up to 10 Gbps at latencies around 140 milliseconds (ms) anywhere in the world, from established base locations in Europe to ships nearly anywhere at sea.

SpaceX advocates will point out that Starlink's low earth orbit (LEO) constellation can provide lower latencies of 20 to 40 ms because its satellites are closer to the earth, with the future potential to deliver speeds up to 1 Gbps. But SES O3b services are a proven commodity, in operation over the past five years, while Starlink is still in beta and only expected to reach downlink speeds of 300 Mbps and uplink speeds of 20 to 40 Mbps in the second half of this year.
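A quick back-of-the-envelope check shows why altitude drives those latency figures. The sketch below computes the idealized round-trip propagation delay for a bent-pipe user-to-gateway path at LEO and MEO altitudes, ignoring processing, queuing, and terrestrial backhaul, which is why the quoted real-world numbers are higher.

SPEED_OF_LIGHT_KM_S = 299_792.458

def bent_pipe_rtt_ms(altitude_km):
    """Idealized round-trip propagation delay for a user -> satellite ->
    gateway -> satellite -> user path (four altitude legs, nadir geometry)."""
    return 4 * altitude_km / SPEED_OF_LIGHT_KM_S * 1000

if __name__ == "__main__":
    print("Starlink-class LEO (~550 km): %.1f ms" % bent_pipe_rtt_ms(550))    # about 7 ms
    print("O3b-class MEO (~8,000 km):    %.1f ms" % bent_pipe_rtt_ms(8000))   # about 107 ms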

But Microsoft is doing more than satellite broadband-to-data-center integration, extending the work to real-world edge computing projects. Working with Ball Aerospace, Microsoft recently validated satellite-linked edge solutions for the Department of Defense. In a series of demonstrations, simulated data from infrared imaging satellites was fed to Microsoft Azure cloud resources, processed using Ball-developed event-driven software, and delivered to multiple endpoints, according to a May 21 Microsoft press release.

A final demonstration used Telesat’s Lightspeed LEO demonstration satellite to directly move Azure-Ball processed data to a Ball-built Ka-band phased array antenna mounted on a tactical vehicle (i.e. a big SUV) and into an Azure Stack Edge device to prove Microsoft cloud information could be delivered directly to the field and processed there.

Future Partners and Connections: Edge Computing in the Sky

Both U.K. government-backed OneWeb and privately-owned Telesat are building out high-speed low latency LEO satellite networks for enterprise use. OneWeb is connecting its network to numerous telecommunications exchange points around the globe to ensure its customers have low-hop low-latency connections to Amazon, Google, and Microsoft while Telesat has discussed how it will build similar connections, using high-speed direct fiber connections between its ground stations and exchange points. Both companies may be examining ways to equal or better the Google Cloud-SpaceX connections, perhaps by working with Microsoft or cutting a short-term deal with Amazon to provide its corporate customers with multiple satellite connectivity options.

Placing edge computing resources on satellites is under consideration by some established cloud vendors and many ambitious startups. NTT Docomo and satellite operator Sky Perfect JSAT announced a collaboration to build a “space integrated computing network” that would include a space data center built around satellites with high-capacity communications and computing. Several new companies such as OrbitsEdge have discussed putting off-the-shelf servers into customized satellites for edge computing anywhere, either in orbit to process imaging from an Earth observation satellite or available overhead for users on the ground in places not suited for the footprint of a Microsoft Azure modular data center.

There’s no clean answer today as to the final mix or makeup of satellite-cloud connections. Will new higher-speed low-latency satellite broadband connections promote more edge backhaul and less edge computing? Or will there be a more balanced hybrid of edge computing and satellite-cloud connections? Data centers in space sound pretty hip, but what are the supporting business models to start throwing loaded racks into orbit? Nobody knows yet and there are many intriguing speculative scenarios.


The concept of deploying datacenters on satellites for edge and fog computing is still in its early stages, but there are a number of companies that are working on it. One of the challenges that needs to be addressed is the high cost of launching and operating satellites. However, as launch costs come down, this technology could become more feasible.

One way to deploy datacenters on satellites is to use AWS Ground Station, a service that provides high-bandwidth, low-latency connectivity to satellites. This means that you can connect your satellite to the AWS cloud and run your applications there.

There are a number of potential benefits to deploying datacenters on satellites for edge and fog computing. One benefit is that it can reduce latency. If you have a satellite that is close to the ground, the latency between the satellite and the user will be much lower than if the satellite is in geosynchronous orbit. This can be important for applications that require real-time data processing, such as self-driving cars or remote surgery.

Another benefit of deploying datacenters on satellites is that it can improve reliability. If a satellite fails, you can quickly replace it with another satellite. This is not possible with terrestrial data centers, which can be damaged by natural disasters or other events.

Of course, there are also some challenges to deploying datacenters on satellites. One challenge is that the environment in space is very harsh. Satellites are exposed to radiation, extreme temperatures, and microgravity. This means that the hardware and software that is used on satellites needs to be very robust.

Another challenge is that the bandwidth to and from satellites is limited. This means that you need to be careful about the amount of data that you transfer to and from the satellite.

Overall, the concept of deploying datacenters on satellites for edge and fog computing is still in its early stages. However, there are a number of potential benefits to this technology. As launch costs come down and the technology matures, this could become a more viable option for businesses that need to reduce latency and improve reliability.

Here are some specific use cases for deploying datacenters on satellites for edge and fog computing:

  • Remote sensing: Satellites can be used to collect data about the Earth's surface, such as weather data, land use data, and ocean data. This data can be processed on the satellite and then sent to the cloud for further analysis.
  • Telemedicine: Satellites can be used to provide telemedicine services to remote areas. Doctors can use the satellite to connect to patients and provide medical advice or treatment.
  • Self-driving cars: Satellites can be used to provide real-time data to self-driving cars. This data can be used to help the cars navigate and avoid obstacles.

These are just a few of the potential use cases for deploying datacenters on satellites for edge and fog computing. As the technology matures, we can expect to see even more use cases for this technology.

 Azure Orbital Ground Station – Satellite Comms | Microsoft Azure

S. Cao et al., "Space-Based Cloud-Fog Computing Architecture and Its Applications," 2019 IEEE World Congress on Services (SERVICES), Milan, Italy, 2019, pp. 166-171, doi: 10.1109/SERVICES.2019.00048.
Abstract: With the development of space technology, it is an inevitable trend to construct a flexible and efficient computing system for space-based information network. The traditional single-satellite computing can't meet the increasing computing requirements, and the single-satellite resource utilization rate is unsatisfactory. This paper proposes a creative space-based cloud-fog computing architecture, which combines the cloud-fog computing technology with the space-based information network. This architecture is expected to change the traditional mode of data processing that relies on ground nodes by introducing the space-based edge cloud and fog satellites. It can also significantly improve the computing and the service capabilities of on-orbit satellites. Using this computing architecture can save the transmission bandwidth of satellite-ground links and improve the real-time performance of time-sensitive service processing, bringing inspirations and innovations to the development of space information networks. In this paper, we also propose some applied scenarios and implement a simulation for verifying this computing architecture.
URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8817134&isnumber=8817055 

 

Monday, June 19, 2023

MQ-9B SeaGuardian® is rewriting the practice of sea power

 


 

MQ-9B Sky Guardian, Japan demo, Sept 3, 2020

GA-ASI’s MQ-9B SeaGuardian Remotely Piloted Aircraft

Sea power is a very old discipline, but unmanned aircraft are helping rewrite the way it’s done.

Leading the way is the MQ-9B SeaGuardian®, which enables the most advanced navies, coast guards, and other maritime authorities to patrol longer, detect more, and make existing units much more effective.

Manufactured by San Diego-based General Atomics Aeronautical Systems, Inc., SeaGuardian has recorded a number of recent first-ever achievements in a range of operational and test environments around the world. Even as users prove out what the system can do as it begins to enter widespread service, they’re only scratching the surface of the ways MQ-9B will alter the practice of sea power.

SeaGuardian has shown it can hunt for and help prosecute submarines. It escorts naval surface task groups. It provides sensing, targeting, and communications for the battle force. It self-deploys from its home station and integrates seamlessly into normal aviation traffic.

As the world’s aerospace leaders gather for the Paris Air Show, MQ-9B has validated and reaffirmed its role as a sea power enabler across the globe.

In just two years’ time, the aircraft has recorded more than 12,000 operational hours in the service of the Indian Navy.

MQ-9B provided security and surveillance for the recent G-7 summit at its island location in Hiroshima, Japan.

And in the spring, SeaGuardian joined the U.S. Navy for one of its most complex and challenging integrated exercises yet – one in which the aircraft joined with human-flown maritime helicopters in a major anti-submarine warfare exercise.

Sub-hunting

As part of this Integrated Battle Problem exercise in May, an MQ-9B aircraft flown by its crew from a ground control station and operated over satellite joined with U.S. Navy helicopter squadrons to search for submarines in a range off the coast of Southern California.

The exercise focused on manned-unmanned teaming, showing how a remotely operated aircraft can join traditional aircraft, such as U.S. Navy MH-60R Seahawk helicopters, in tackling complex missions. In this example, the helicopters released special sensors called sonobuoys, which dropped onto the surface of the ocean to detect submarines. The SeaGuardian then took over monitoring the data sent by the sonobuoys.


MQ-9B SeaGuardian shown in its ASW configuration dropping sonobuoys for submarine detection.

The sensors can listen for sounds or emit an active ping to search for undersea targets. They also network with battle force commanders to report what they’re detecting. SeaGuardian can monitor those network transmissions, release sonobuoys of its own, or both. The aircraft’s incredible endurance gives Navy commanders wide versatility in how to tackle a mission.

In the case of the recent exercise, helicopter crews flew out from San Diego, dropped their sonobuoys, and then SeaGuardian took over monitoring them. The sensors detected a simulated submarine. This meant other helicopters could deploy with precise data about the target’s location and course and then attack – in this case, dropping simulated torpedoes that sank the submarine for the purposes of the exercise.

Compare this process with an old-fashioned anti-submarine prosecution, in which human Navy crews might have needed to fly dozens of hours or more, wearing out people, aircraft, and equipment in order to prosecute a submarine. The remotely piloted MQ-9B does this at a fraction of the cost and with no crew on board to endanger over the water or in combat.

The U.S. Navy sub-hunting exercise was only one of several such exploits for SeaGuardian. Another involved the aircraft partnering with Navy carrier strike groups off the coast of Hawaii in April, working with warships, aircraft, and other units to ensure the safe passage of the surface ships.

Strike group integration

Carriers, cruisers, and destroyers, as well as F-35 Lightning II fighters, F/A-18E/F Super Hornets, EA-18G Growlers, E-2D Advanced Hawkeyes, MH-60 Seahawks, and P-8 Poseidons were all in the mix with MQ-9B SeaGuardian, which provided them with maritime domain awareness, information dominance, targeting capability, and more.

What the aircraft did, in effect, was serve as the distant eyes and ears for naval commanders. Its onboard sensors can see all through the visual and infrared spectrum, including – with its onboard multi-mode radar – through clouds, fog, mist, or smoke. Other onboard systems can hear throughout the radio frequency spectrum, collecting intelligence of all kinds that contributes to the most complete common operating picture possible.

No other large medium-altitude, long-endurance aircraft can contribute to sea power like this – and there are even more ways that SeaGuardian contributes. The aircraft’s proprietary Detect and Avoid System, invented by GA-ASI, means that it can operate in civil airspace just like any other aircraft. This eliminates the need for special arrangements or human-flown escort aircraft like those that remotely piloted aircraft might have needed in the past.

Also new: SeaGuardian self-deploys to far-flung operating areas. In each of the recent maritime exercises supported by MQ-9B, the aircraft took off from its home station in the California high desert and flew to the base where it was needed.

To participate in the Northern Edge exercise around Alaska in May, SeaGuardian flew more than 2,000 miles in a single hop, highlighting its extraordinary range and endurance. The MQ-9B then flew its missions, took part in the various exercises and, when it was finished, returned home the same way, with a single flight. Compare that to older practices, in which an unmanned aircraft might have needed to be disassembled, boxed up, loaded into a cargo aircraft, flown to its operating location, and then reassembled there for use, and the savings in manpower, time, and operating costs are magnified.

Advanced onboard and supporting systems help make all this possible, including automatic takeoff and landing, artificial intelligence and machine learning, and cutting-edge networks. Satellite operations mean that MQ-9B’s pilots and crews can be located anywhere, even thousands of miles from where the aircraft is operating. During Northern Edge, for example, around Alaska, the crews flew the SeaGuardian from the Pacific Northwest area of the United States at Naval Air Station Whidbey Island.

This remote operation not only takes human crews off the aircraft and removes them from harm’s way at sea. It means MQ-9B can cover other inhospitable areas, such as the cold, ice-covered polar regions, without burdensome hardship deployments for crews or the necessity to position other units for search and rescue in case of a mishap. Taking the people off the aircraft protects them and their support units – all while reducing cost and complexity.

The big challenges of the 21st century to seafaring nations and the responsible use of the oceans aren’t simple or easy to tackle. But the good news is that navies, coast guards, and others charged with sea power, maritime domain awareness, search and rescue, and other related missions have a tool ready to meet those challenges head-on in the MQ-9B SeaGuardian.

