Time-Discounted GAE
In semi-MDPs, each step has an associated duration. Instead of the usual value equation
\begin{equation} V(s_1) = r_1 + \gamma r_2 + \gamma^2 r_3 + \dots \end{equation}
one discounts based on step duration:
\begin{equation} V_{\Delta t}(s_1) = \gamma^{\Delta t_1} r_1 + \gamma^{\Delta t_1 + \Delta t_2} r_2 + \gamma^{\Delta t_1 + \Delta t_2 + \Delta t_3} r_3 + \dots \end{equation}
using the convention that the reward is given at the end of a step.
The generalized advantage estimator can be rewritten accordingly. In our implementation, the exponential decay lambda is applied per step (as opposed to per unit of time).
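Concretely, the estimator keeps the usual GAE recursion but compounds gamma over each step's duration, while lambda still decays once per step. The following is a minimal sketch of that recursion under the end-of-step reward convention; the names and structure are illustrative assumptions, not the library's implementation:
import numpy as np

def time_discounted_gae(rewards, values, d_ts, gamma=0.999, lam=0.95):
    # rewards[k]: reward granted at the end of step k
    # values[k]: V(s_k); values[-1] bootstraps the value after the final step
    # d_ts[k]: duration of step k, in the units of gamma's time exponent
    advantages = np.zeros(len(rewards))
    gae = 0.0
    for k in reversed(range(len(rewards))):
        step_discount = gamma ** d_ts[k]  # gamma compounds with elapsed time
        # TD error with the reward and next value discounted over the step
        delta = step_discount * (rewards[k] + values[k + 1]) - values[k]
        gae = delta + lam * step_discount * gae  # lambda decays once per step
        advantages[k] = gae
    return advantages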
RLlib Version
RLlib is actively developed and can change significantly from version to version. For this script, the following version is used:
[1]:
from importlib.metadata import version
version("ray") # Parent package of RLlib
[1]:
'2.35.0'
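If results differ, the matching version can be pinned at install time; for example, from a notebook cell:
%pip install "ray[rllib]==2.35.0"  # match the version printed above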
Define the Environment
A simple single-satellite environment is defined, as in :doc:`examples/rllib_training`.
[2]:
import numpy as np
from bsk_rl import act, data, obs, sats, scene
from bsk_rl.sim import dyn, fsw
class ScanningDownlinkDynModel(
    dyn.ContinuousImagingDynModel, dyn.GroundStationDynModel
):
    # Define some custom properties to be accessed in the state
    @property
    def instrument_pointing_error(self) -> float:
        # Angle [rad] between the instrument boresight and the nadir direction
        r_BN_P_unit = self.r_BN_P / np.linalg.norm(self.r_BN_P)
        c_hat_P = self.satellite.fsw.c_hat_P
        return np.arccos(np.dot(-r_BN_P_unit, c_hat_P))

    @property
    def solar_pointing_error(self) -> float:
        # Angle [rad] between the solar panel normal and the sun direction
        a = (
            self.world.gravFactory.spiceObject.planetStateOutMsgs[self.world.sun_index]
            .read()
            .PositionVector
        )
        a_hat_N = a / np.linalg.norm(a)
        nHat_B = self.satellite.sat_args["nHat_B"]
        NB = np.transpose(self.BN)  # body-to-inertial rotation
        nHat_N = NB @ nHat_B
        return np.arccos(np.dot(nHat_N, a_hat_N))
class ScanningSatellite(sats.AccessSatellite):
    observation_spec = [
        obs.SatProperties(
            dict(prop="storage_level_fraction"),
            dict(prop="battery_charge_fraction"),
            dict(prop="wheel_speeds_fraction"),
            dict(prop="instrument_pointing_error", norm=np.pi),
            dict(prop="solar_pointing_error", norm=np.pi),
        ),
        obs.OpportunityProperties(
            dict(prop="opportunity_open", norm=5700),
            dict(prop="opportunity_close", norm=5700),
            type="ground_station",
            n_ahead_observe=1,
        ),
        obs.Eclipse(norm=5700),
    ]
    action_spec = [
        act.Scan(duration=180.0),
        act.Charge(duration=120.0),
        act.Downlink(duration=60.0),
        act.Desat(duration=60.0),
    ]
    dyn_type = ScanningDownlinkDynModel
    fsw_type = fsw.ContinuousImagingFSWModel
sat = ScanningSatellite(
    "Scanner-1",
    sat_args=dict(
        # Data
        dataStorageCapacity=5000 * 8e6,  # bits
        storageInit=lambda: np.random.uniform(0.0, 0.8) * 5000 * 8e6,
        instrumentBaudRate=0.5 * 8e6,
        transmitterBaudRate=-50 * 8e6,
        # Power
        batteryStorageCapacity=200 * 3600,  # W*s
        storedCharge_Init=lambda: np.random.uniform(0.3, 1.0) * 200 * 3600,
        basePowerDraw=-10.0,  # W
        instrumentPowerDraw=-30.0,  # W
        transmitterPowerDraw=-25.0,  # W
        thrusterPowerDraw=-80.0,  # W
        panelArea=0.25,
        # Attitude
        imageAttErrorRequirement=0.1,
        imageRateErrorRequirement=0.1,
        disturbance_vector=lambda: np.random.normal(scale=0.0001, size=3),  # N*m
        maxWheelSpeed=6000.0,  # RPM
        wheelSpeeds=lambda: np.random.uniform(-3000, 3000, 3),
        desatAttitude="nadir",
    ),
)
duration = 5 * 5700.0 # About 5 orbits
env_args = dict(
    satellite=sat,
    scenario=scene.UniformNadirScanning(value_per_second=1 / duration),
    rewarder=data.ScanningTimeReward(),
    time_limit=duration,
    failure_penalty=-1.0,
    terminate_on_time_limit=True,
)
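Since the actions have different durations (and steps may end early on events), the elapsed time per step varies; this is exactly what the time-discounted learner accounts for. As a quick check, the environment can be stepped directly through Gymnasium. This sketch assumes the SatelliteTasking-v1 registration used in the other examples and that this version reports the step duration under the d_ts info key:
import gymnasium as gym

env = gym.make("SatelliteTasking-v1", **env_args)
observation, info = env.reset()
observation, reward, terminated, truncated, info = env.step(0)  # action 0: Scan
print(info.get("d_ts"))  # elapsed simulation time for the step, in seconds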
RLlib Configuration
The configuration is mostly the same as in the standard example.
[3]:
import bsk_rl.utils.rllib # noqa To access "SatelliteTasking-RLlib"
from ray.rllib.algorithms.ppo import PPOConfig
N_CPUS = 3
training_args = dict(
    lr=0.00003,
    gamma=0.999,
    train_batch_size=250,
    num_sgd_iter=10,
    model=dict(fcnet_hiddens=[512, 512], vf_share_layers=False),
    lambda_=0.95,
    use_kl_loss=False,
    clip_param=0.1,
    grad_clip=0.5,
    reward_time="step_end",  # grant each step's reward at the end of the step
)
config = (
    PPOConfig()
    .training(**training_args)
    .env_runners(num_env_runners=N_CPUS - 1, sample_timeout_s=1000.0)
    .environment(
        env="SatelliteTasking-RLlib",
        env_config=env_args,
    )
    .reporting(
        metrics_num_episodes_for_smoothing=1,
        metrics_episode_collection_timeout_s=180,
    )
    .checkpointing(export_native_model_files=True)
    .framework(framework="torch")
    .api_stack(
        enable_rl_module_and_learner=True,
        enable_env_runner_and_connector_v2=True,
    )
)
Rewards can also be distributed at the start of the step by setting reward_time="step_start".
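In that case, a step's reward is not discounted over the duration of the step in which it is earned, so the value equation becomes
\begin{equation} V_{\Delta t}(s_1) = r_1 + \gamma^{\Delta t_1} r_2 + \gamma^{\Delta t_1 + \Delta t_2} r_3 + \dots \end{equation}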
The one additional setting that must be configured is the appropriate learner class. This learner uses the d_ts key from the info dict to discount by the step duration, not just the step count.
[4]:
from bsk_rl.utils.rllib.discounting import TimeDiscountedGAEPPOTorchLearner
config.training(learner_class=TimeDiscountedGAEPPOTorchLearner)
[4]:
<ray.rllib.algorithms.ppo.ppo.PPOConfig at 0x7f69a7d31950>
Training can then proceed as normal.
[5]:
import ray
from ray import tune
ray.init(
    ignore_reinit_error=True,
    num_cpus=N_CPUS,
    object_store_memory=2_000_000_000,  # 2 GB
)

# Run the training
tune.run(
    "PPO",
    config=config.to_dict(),
    stop={"training_iteration": 2},  # Adjust the number of iterations as needed
)

# Shutdown Ray
ray.shutdown()
2025-05-09 15:46:58,412 INFO worker.py:1783 -- Started a local Ray instance.
2025-05-09 15:46:59,190 INFO tune.py:616 -- [output] This uses the legacy output and progress reporter, as Jupyter notebooks are not supported by the new engine, yet. For more information, please see https://github.com/ray-project/ray/issues/36949
/opt/hostedtoolcache/Python/3.11.12/x64/lib/python3.11/site-packages/gymnasium/spaces/box.py:130: UserWarning: WARN: Box bound precision lowered by casting to float32
gym.logger.warn(f"Box bound precision lowered by casting to {self.dtype}")
/opt/hostedtoolcache/Python/3.11.12/x64/lib/python3.11/site-packages/gymnasium/utils/passive_env_checker.py:164: UserWarning: WARN: The obs returned by the `reset()` method was expecting numpy array dtype to be float32, actual type: float64
logger.warn(
/opt/hostedtoolcache/Python/3.11.12/x64/lib/python3.11/site-packages/gymnasium/utils/passive_env_checker.py:188: UserWarning: WARN: The obs returned by the `reset()` method is not within the observation space.
logger.warn(f"{pre} is not within the observation space.")
Tune Status
Current time: 2025-05-09 15:48:36
Running for: 00:01:37.22
Memory: 4.2/15.6 GiB
System Info
Using FIFO scheduling algorithm. Logical resource usage: 3.0/3 CPUs, 0/0 GPUs
Trial Status
Trial name | status | loc | iter | total time (s) | num_env_steps_sampled_lifetime | num_episodes_lifetime | num_env_steps_trained_lifetime
---|---|---|---|---|---|---|---
PPO_SatelliteTasking-RLlib_d7c0b_00000 | TERMINATED | 10.1.0.43:5917 | 2 | 86.6877 | 8000 | 43 | 8000
(PPO pid=5917) Install gputil for GPU system monitoring.
(SingleAgentEnvRunner pid=5965) 2025-05-09 15:47:11,062 sats.satellite.Scanner-1 WARNING <6540.00> Scanner-1: failed battery_valid check
(SingleAgentEnvRunner pid=5965) 2025-05-09 15:47:17,824 sats.satellite.Scanner-1 WARNING <9840.00> Scanner-1: failed battery_valid check [repeated 3x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/user-guides/configure-logging.html#log-deduplication for more options.)
(SingleAgentEnvRunner pid=5965) 2025-05-09 15:47:24,681 sats.satellite.Scanner-1 WARNING <21840.00> Scanner-1: failed battery_valid check [repeated 3x across cluster]
(SingleAgentEnvRunner pid=5965) 2025-05-09 15:47:31,315 sats.satellite.Scanner-1 WARNING <28020.00> Scanner-1: failed battery_valid check [repeated 3x across cluster]
(SingleAgentEnvRunner pid=5966) 2025-05-09 15:47:38,368 sats.satellite.Scanner-1 WARNING <26880.00> Scanner-1: failed battery_valid check [repeated 2x across cluster]
(SingleAgentEnvRunner pid=5966) 2025-05-09 15:47:44,209 sats.satellite.Scanner-1 WARNING <10440.00> Scanner-1: failed battery_valid check [repeated 3x across cluster]
Trial Progress
Trial name: PPO_SatelliteTasking-RLlib_d7c0b_00000
env_runners: {'num_episodes': 23, 'episode_return_mean': -0.17308771929824562, 'agent_episode_returns_mean': {'default_agent': -0.17308771929824562}, 'num_env_steps_sampled_lifetime': 16000, 'num_agent_steps_sampled_lifetime': {'default_agent': 12000}, 'episode_len_max': 282, 'episode_return_max': 0.36592982456140355, 'module_episode_returns_mean': {'default_policy': -0.17308771929824562}, 'num_module_steps_sampled_lifetime': {'default_policy': 12000}, 'num_module_steps_sampled': {'default_policy': 4000}, 'episode_return_min': -0.7121052631578948, 'sample': np.float64(37.86604122726521), 'episode_len_min': 216, 'episode_len_mean': 249.0, 'num_agent_steps_sampled': {'default_agent': 4000}, 'episode_duration_sec_mean': 4.570918931500046, 'num_env_steps_sampled': 4000, 'time_between_sampling': np.float64(5.722578591999991)}
fault_tolerance: {'num_healthy_workers': 2, 'num_in_flight_async_reqs': 0, 'num_remote_worker_restarts': 0}
learners: {'__all_modules__': {'num_env_steps_trained': 4000, 'num_non_trainable_parameters': 0.0, 'num_trainable_parameters': 139013.0, 'total_loss': -0.09955130517482758, 'num_module_steps_trained': 4000}, 'default_policy': {'num_module_steps_trained': 4000, 'num_non_trainable_parameters': 0.0, 'vf_loss_unclipped': 0.0002902592532336712, 'curr_entropy_coeff': 0.0, 'num_trainable_parameters': 139013.0, 'total_loss': -0.09955130517482758, 'vf_loss': 0.0002902592532336712, 'vf_explained_var': 0.001962721347808838, 'mean_kl_loss': 0.02008768543601036, 'policy_loss': -0.10385910421609879, 'entropy': 1.3456084728240967, 'default_optimizer_learning_rate': 5e-05, 'curr_kl_coeff': 0.30000001192092896}}
num_agent_steps_sampled_lifetime: {'default_agent': 8000}
num_env_steps_sampled_lifetime: 8000
num_env_steps_trained_lifetime: 8000
num_episodes_lifetime: 43
perf: {'cpu_util_percent': np.float64(48.87540983606557), 'ram_util_percent': np.float64(27.23114754098361)}
timers: {'env_runner_sampling_timer': 38.72821951912006, 'learner_update_timer': 4.820061714649947, 'synch_weights': 0.00632996742007208, 'synch_env_connectors': 0.006376612810079224}
(SingleAgentEnvRunner pid=5965) 2025-05-09 15:47:53,966 sats.satellite.Scanner-1 WARNING <16380.00> Scanner-1: failed battery_valid check [repeated 3x across cluster]
(SingleAgentEnvRunner pid=5966) 2025-05-09 15:47:59,748 sats.satellite.Scanner-1 WARNING <21060.00> Scanner-1: failed battery_valid check [repeated 3x across cluster]
(SingleAgentEnvRunner pid=5965) 2025-05-09 15:48:07,834 sats.satellite.Scanner-1 WARNING <22680.00> Scanner-1: failed battery_valid check [repeated 3x across cluster]
(SingleAgentEnvRunner pid=5966) 2025-05-09 15:48:14,335 sats.satellite.Scanner-1 WARNING <8520.00> Scanner-1: failed battery_valid check [repeated 5x across cluster]
(SingleAgentEnvRunner pid=5966) 2025-05-09 15:48:21,112 sats.satellite.Scanner-1 WARNING <12720.00> Scanner-1: failed battery_valid check [repeated 3x across cluster]
(SingleAgentEnvRunner pid=5965) 2025-05-09 15:48:26,722 sats.satellite.Scanner-1 WARNING <18720.00> Scanner-1: failed battery_valid check [repeated 2x across cluster]
2025-05-09 15:48:36,446 INFO tune.py:1009 -- Wrote the latest version of all result files and experiment state to '/home/runner/ray_results/PPO_2025-05-09_15-46-59' in 0.0169s.
2025-05-09 15:48:36,644 INFO tune.py:1041 -- Total run time: 97.45 seconds (97.20 seconds for the tuning loop).
(SingleAgentEnvRunner pid=5965) 2025-05-09 15:48:30,725 sats.satellite.Scanner-1 WARNING <22560.00> Scanner-1: failed battery_valid check