Edge computing has become indispensable for wireless network communication, particularly in scenarios with constrained computing resources such as remote areas or regions where ground communication disruptions are frequent, as it significantly enhances the quality of service for ground terminal devices. This paper proposes a Space-Air-Ground Integrated Network (SAGIN) edge computing model. In such scenarios, limited computational resources make edge server deployment, edge server computing resource allocation, and offloading strategies pivotal to improving network service capabilities. To address this, we explore different deep reinforcement learning algorithms to train the model toward optimizing the system objective. Among them, the Soft Actor-Critic (SAC) algorithm demonstrates the best applicability to this class of problems, improving training effectiveness by approximately 18% over the DDPG algorithm (measured by average reward). The edge computing offloading technique also serves as a crucial foundation for future cross-domain interconnection and distributed task collaboration in edge computing.
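To make the algorithmic setup concrete, the following is a minimal sketch, not the paper's implementation, of training an SAC agent on a toy task-offloading environment with continuous actions (offloading ratio and server compute share). The environment name, state features, latency model, and reward are illustrative assumptions; `gymnasium` and `stable-baselines3` are assumed available.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC


class ToyOffloadEnv(gym.Env):
    """Hypothetical SAGIN offloading environment: one terminal, one edge server."""

    def __init__(self):
        super().__init__()
        # State: [task size, local CPU freq, channel gain, server load], normalized.
        self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(4,), dtype=np.float32)
        # Action: [offloading ratio in [0,1], fraction of server CPU allocated].
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(2,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self._t = 0
        self.state = self.np_random.uniform(0.1, 1.0, size=4).astype(np.float32)
        return self.state, {}

    def step(self, action):
        ratio, share = np.clip(action, 0.0, 1.0)
        task, local_cpu, gain, load = self.state
        # Illustrative latency model: local processing + transmission + remote processing.
        local_delay = (1.0 - ratio) * task / (local_cpu + 1e-3)
        tx_delay = ratio * task / (gain + 1e-3)
        remote_delay = ratio * task / (share * (1.0 - load) + 1e-3)
        reward = -float(local_delay + tx_delay + remote_delay)  # minimize total delay
        self._t += 1
        self.state = self.np_random.uniform(0.1, 1.0, size=4).astype(np.float32)
        return self.state, reward, False, self._t >= 50, {}


env = ToyOffloadEnv()
model = SAC("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)  # average episode reward is the comparison metric
```

Under this setup, comparing SAC against DDPG amounts to swapping the agent class while keeping the environment and reward fixed, with average episode reward as the evaluation metric referenced in the abstract.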