DEBUG:commons:version: 6.1.7
INFO:typing:version: 6.1.8
DEBUG:aido_schemas:aido-protocols version 6.0.33 path /usr/local/lib/python3.8/dist-packages
INFO:nodes:version 6.1.1 path /usr/local/lib/python3.8/dist-packages pyparsing 2.4.6
2021-12-03 14:49:12.780345: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-12-03 14:49:14.395856: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2021-12-03 14:49:14.403006: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:14.403138: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce GTX 1080 computeCapability: 6.1
coreClock: 1.835GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s
2021-12-03 14:49:14.403166: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-12-03 14:49:14.404315: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-12-03 14:49:14.405267: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-12-03 14:49:14.405435: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-12-03 14:49:14.406633: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-12-03 14:49:14.407197: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-12-03 14:49:14.409310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-12-03 14:49:14.409397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:14.409567: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:14.410052: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-12-03 14:49:14.410538: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-12-03 14:49:14.416075: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3799900000 Hz
2021-12-03 14:49:14.416651: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ea44c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-12-03 14:49:14.416666: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-12-03 14:49:14.471622: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:14.471848: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ccb990 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-12-03 14:49:14.471867: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce GTX 1080, Compute Capability 6.1
2021-12-03 14:49:14.472144: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:14.472262: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce GTX 1080 computeCapability: 6.1
coreClock: 1.835GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s
2021-12-03 14:49:14.472312: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-12-03 14:49:14.472349: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-12-03 14:49:14.472369: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-12-03 14:49:14.472385: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-12-03 14:49:14.472399: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-12-03 14:49:14.472412: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-12-03 14:49:14.472426: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-12-03 14:49:14.472476: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:14.472591: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:14.472669: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-12-03 14:49:14.472693: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-12-03 14:49:14.733688: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-12-03 14:49:14.733722: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2021-12-03 14:49:14.733729: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2021-12-03 14:49:14.733871: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:14.734065: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:14.734224: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1024 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
DEBUG:ipce:version 6.0.36 path /usr/local/lib/python3.8/dist-packages
INFO:nodes_wrapper:checking implementation
INFO:nodes_wrapper:checking implementation OK
DEBUG:nodes_wrapper:run_loop
fin: /fifos/ego0-in
fout: fifo:/fifos/ego0-out
INFO:nodes_wrapper:Fifo /fifos/ego0-out created. I will block until a reader appears.
INFO:nodes_wrapper:Fifo reader appeared for /fifos/ego0-out.
INFO:nodes_wrapper:Node RLlibAgent starting reading
fi_desc: /fifos/ego0-in
fo_desc: fifo:/fifos/ego0-out
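
The fifo handshake above is plain POSIX named-pipe behavior; here is a minimal sketch of the blocking open (illustration only, not the zuper_nodes_wrapper code; the path is taken from the log):

    import os

    fifo_out = "/fifos/ego0-out"  # output fifo path from the log above

    # mkfifo creates the named pipe; opening it for writing then blocks until
    # a reader opens the other end -- hence "I will block until a reader appears".
    if not os.path.exists(fifo_out):
        os.mkfifo(fifo_out)
    with open(fifo_out, "wb") as fo:
        fo.write(b"...")  # the wrapper would stream its encoded messages here
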
INFO:nodes_wrapper:48d542d0cc84:RLlibAgent: init()
WARNING:config.config:Found paths with seed 3092:
WARNING:config.config:0: ./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle30_AIDOWrapper_DomainRand_3092/Dec10_00-31-47/config_dump_3092.yml
WARNING:config.config:Found checkpoints in ./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle30_AIDOWrapper_DomainRand_3092/Dec10_00-31-47:
WARNING:config.config:0: ./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle30_AIDOWrapper_DomainRand_3092/Dec10_00-31-47/PPO_0_2020-12-10_00-31-48u8cipgyq/checkpoint_363/checkpoint-363
WARNING:config.config:Config loaded from ./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle30_AIDOWrapper_DomainRand_3092/Dec10_00-31-47/config_dump_3092.yml
WARNING:config.config:Model checkpoint loaded from ./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle30_AIDOWrapper_DomainRand_3092/Dec10_00-31-47/PPO_0_2020-12-10_00-31-48u8cipgyq/checkpoint_363/checkpoint-363
WARNING:config.config:Updating default config values by:
env_config:
  mode: inference
WARNING:config.config:Env_config.mode is 'inference', some hyperparameters will be overwritten by:
rllib_config:
  num_workers: 0
  num_gpus: 0
  callbacks: {}
ray_init_config:
  num_cpus: 1
  memory: 2097152000
  object_store_memory: 209715200
  redis_max_memory: 209715200
  local_mode: true
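
The two YAML fragments above are merged recursively into the loaded config. A hedged sketch of that kind of deep update (deep_update is an illustrative helper, not necessarily the name used in config.config):

    def deep_update(base: dict, overrides: dict) -> dict:
        # Recursively merge `overrides` into `base`, leaving untouched keys alone.
        for key, value in overrides.items():
            if isinstance(value, dict) and isinstance(base.get(key), dict):
                deep_update(base[key], value)
            else:
                base[key] = value
        return base

    config = {"env_config": {"mode": "train", "frame_skip": 3}}
    deep_update(config, {"env_config": {"mode": "inference"}})
    # config is now {"env_config": {"mode": "inference", "frame_skip": 3}}
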
INFO:nodes_wrapper:48d542d0cc84:RLlibAgent: === Wrappers ===================================
INFO:nodes_wrapper:48d542d0cc84:RLlibAgent: Observation wrappers
<ClipImageWrapper<DummyDuckietownGymLikeEnv instance>>
<ResizeWrapper<ClipImageWrapper<DummyDuckietownGymLikeEnv instance>>>
<ObservationBufferWrapper<ResizeWrapper<ClipImageWrapper<DummyDuckietownGymLikeEnv instance>>>>
<NormalizeWrapper<ObservationBufferWrapper<ResizeWrapper<ClipImageWrapper<DummyDuckietownGymLikeEnv instance>>>>>
INFO:nodes_wrapper:48d542d0cc84:RLlibAgent: Action wrappers
<Heading2WheelVelsWrapper<NormalizeWrapper<ObservationBufferWrapper<ResizeWrapper<ClipImageWrapper<DummyDuckietownGymLikeEnv instance>>>>>>
INFO:nodes_wrapper:48d542d0cc84:RLlibAgent: Reward wrappers
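
The chain printed above is the usual gym pattern of nesting one wrapper around another; the classes are project-specific, but an analogous chain built from gym's stock wrappers looks like this (a sketch, with gym and its image-env extras assumed installed):

    import gym
    from gym.wrappers import FrameStack, ResizeObservation

    # Stand-ins for the project-specific chain above: ClipImageWrapper ->
    # ResizeWrapper -> ObservationBufferWrapper mirrors crop -> resize -> stack.
    env = gym.make("CarRacing-v0")            # any image-observation env
    env = ResizeObservation(env, (84, 84))    # cf. resized_input_shape: (84, 84)
    env = FrameStack(env, 3)                  # cf. frame_stacking_depth: 3
    obs = env.reset()
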
INFO:nodes_wrapper:48d542d0cc84:RLlibAgent: === Config ===================================
INFO:nodes_wrapper:48d542d0cc84:RLlibAgent: seed: 3092
experiment_name: PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle30_AIDOWrapper_DomainRand
algo: PPO
algo_config_files:
  PPO: config/algo/ppo.yml
  general: config/algo/general.yml
env_config:
  mode: inference
  episode_max_steps: 500
  resized_input_shape: (84, 84)
  crop_image_top: true
  top_crop_divider: 3
  grayscale_image: false
  frame_stacking: true
  frame_stacking_depth: 3
  motion_blur: false
  action_type: heading
  reward_function: posangle
  distortion: true
  accepted_start_angle_deg: 30
  simulation_framerate: 30
  frame_skip: 3
  action_delay_ratio: 0.0
  training_map: multimap_aido5
  domain_rand: true
  dynamics_rand: true
  camera_rand: true
  frame_repeating: 0.0
  spawn_obstacles: false
  obstacles:
    duckie:
      density: 0.5
      static: true
    duckiebot:
      density: 0
      static: false
  spawn_forward_obstacle: false
  aido_wrapper: true
  wandb:
    project: duckietown-rllib
    experiment_name: PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle30_AIDOWrapper_DomainRand
    seed: 3092
ray_init_config:
  num_cpus: 1
  webui_host: 127.0.0.1
  memory: 2097152000
  object_store_memory: 209715200
  redis_max_memory: 209715200
  local_mode: true
restore_seed: 3091
restore_experiment_idx: 0
restore_checkpoint_idx: 0
debug_hparams:
  rllib_config:
    num_workers: 1
    num_gpus: 0
  ray_init_config:
    num_cpus: 1
    memory: 2097152000
    object_store_memory: 209715200
    redis_max_memory: 209715200
    local_mode: true
inference_hparams:
  rllib_config:
    num_workers: 0
    num_gpus: 0
    callbacks: {}
  ray_init_config:
    num_cpus: 1
    memory: 2097152000
    object_store_memory: 209715200
    redis_max_memory: 209715200
    local_mode: true
timesteps_total: 4000000.0
rllib_config:
  num_workers: 0
  sample_batch_size: 265
  num_gpus: 0
  train_batch_size: 4096
  gamma: 0.99
  lr: 5.0e-05
  monitor: false
  evaluation_interval: 25
  evaluation_num_episodes: 2
  evaluation_config:
    monitor: false
    explore: false
    seed: 1234
  lambda: 0.95
  sgd_minibatch_size: 128
  vf_loss_coeff: 0.5
  entropy_coeff: 0.0
  clip_param: 0.2
  vf_clip_param: 0.2
  grad_clip: 0.5
  env: Duckietown
  callbacks: {}
  env_config:
    mode: inference
    episode_max_steps: 500
    resized_input_shape: (84, 84)
    crop_image_top: true
    top_crop_divider: 3
    grayscale_image: false
    frame_stacking: true
    frame_stacking_depth: 3
    motion_blur: false
    action_type: heading
    reward_function: posangle
    distortion: true
    accepted_start_angle_deg: 30
    simulation_framerate: 30
    frame_skip: 3
    action_delay_ratio: 0.0
    training_map: multimap_aido5
    domain_rand: true
    dynamics_rand: true
    camera_rand: true
    frame_repeating: 0.0
    spawn_obstacles: false
    obstacles:
      duckie:
        density: 0.5
        static: true
      duckiebot:
        density: 0
        static: false
    spawn_forward_obstacle: false
    aido_wrapper: true
    wandb:
      project: duckietown-rllib
      experiment_name: PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle30_AIDOWrapper_DomainRand
      seed: 3092
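
The whole block above is persisted next to the checkpoint as config_dump_3092.yml, so reloading it is plain YAML (a sketch, not the project's loader):

    import yaml

    dump = ("./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle30_AIDOWrapper"
            "_DomainRand_3092/Dec10_00-31-47/config_dump_3092.yml")
    with open(dump) as f:
        config = yaml.safe_load(f)
    print(config["rllib_config"]["num_workers"])  # -> 0 in inference mode
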
2021-12-03 14:49:15,023 INFO trainer.py:428 -- Tip: set 'eager': true or the --eager flag to enable TensorFlow eager execution
2021-12-03 14:49:15,032 ERROR syncer.py:39 -- Log sync requires rsync to be installed.
2021-12-03 14:49:15,033 WARNING deprecation.py:29 -- DeprecationWarning: `sample_batch_size` has been deprecated. Use `rollout_fragment_length` instead. This will raise an error in the future!
2021-12-03 14:49:15,033 INFO trainer.py:583 -- Current log_level is WARN. For more information, set 'log_level': 'INFO' / 'DEBUG' or use the -v and -vv flags.
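
The deprecation warning above is mechanical to fix: RLlib renamed sample_batch_size to rollout_fragment_length, so the loaded config can be patched before the trainer is built (sketch):

    rllib_config = {"sample_batch_size": 265}  # value from the config dump above
    # Pop the deprecated key so the warning (and the future error) goes away.
    rllib_config["rollout_fragment_length"] = rllib_config.pop("sample_batch_size")
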
2021-12-03 14:49:15.038686: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:15.038804: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce GTX 1080 computeCapability: 6.1
coreClock: 1.835GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s
2021-12-03 14:49:15.038838: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-12-03 14:49:15.038862: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-12-03 14:49:15.038876: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-12-03 14:49:15.038888: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-12-03 14:49:15.038900: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-12-03 14:49:15.038912: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-12-03 14:49:15.038926: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-12-03 14:49:15.038975: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:15.039083: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:15.039163: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-12-03 14:49:15.039187: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-12-03 14:49:15.039192: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2021-12-03 14:49:15.039196: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2021-12-03 14:49:15.039260: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:15.039375: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:15.039462: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1024 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
2021-12-03 14:49:15.285041: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-12-03 14:49:15.712230: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-12-03 14:49:17.671949: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:17.672068: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce GTX 1080 computeCapability: 6.1
coreClock: 1.835GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s
2021-12-03 14:49:17.672100: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-12-03 14:49:17.672122: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-12-03 14:49:17.672135: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-12-03 14:49:17.672148: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-12-03 14:49:17.672160: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-12-03 14:49:17.672171: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-12-03 14:49:17.672184: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-12-03 14:49:17.672230: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:17.672335: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:17.672408: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-12-03 14:49:17.672426: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-12-03 14:49:17.672431: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2021-12-03 14:49:17.672434: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2021-12-03 14:49:17.672487: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:17.672592: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 14:49:17.672672: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1024 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
2021-12-03 14:49:18,322 INFO trainable.py:217 -- Getting current IP.
2021-12-03 14:49:18,322 WARNING util.py:37 -- Install gputil for GPU system monitoring.
INFO:nodes_wrapper:48d542d0cc84:RLlibAgent: Restoring checkpoint from: ./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle30_AIDOWrapper_DomainRand_3092/Dec10_00-31-47/PPO_0_2020-12-10_00-31-48u8cipgyq/checkpoint_363/checkpoint-363
2021-12-03 14:49:18,362 INFO trainable.py:217 -- Getting current IP.
2021-12-03 14:49:18,363 INFO trainable.py:422 -- Restored on 172.17.0.2 from checkpoint: ./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle30_AIDOWrapper_DomainRand_3092/Dec10_00-31-47/PPO_0_2020-12-10_00-31-48u8cipgyq/checkpoint_363/checkpoint-363
2021-12-03 14:49:18,363 INFO trainable.py:430 -- Current state after restoring: {'_iteration': 363, '_timesteps_total': 1539120, '_time_total': 110224.9016327858, '_episodes_total': 8614}
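
Restoring reproduces the trainer state shown above (_iteration 363, _timesteps_total 1539120). A hedged sketch against the ray 0.8-era API these logs imply; it assumes the "Duckietown" env id was already registered with ray.tune.register_env:

    import ray
    from ray.rllib.agents.ppo import PPOTrainer

    ray.init(num_cpus=1, local_mode=True)  # matches ray_init_config above
    trainer = PPOTrainer(env="Duckietown",  # assumes register_env("Duckietown", ...)
                         config={"num_workers": 0, "num_gpus": 0})
    trainer.restore(
        "./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle30_AIDOWrapper"
        "_DomainRand_3092/Dec10_00-31-47/PPO_0_2020-12-10_00-31-48u8cipgyq"
        "/checkpoint_363/checkpoint-363")
    # action = trainer.compute_action(obs)  # then query actions per observation
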
INFO:nodes_wrapper:48d542d0cc84:RLlibAgent: Starting episode "episode".
1 Physical GPUs, 1 Logical GPUs
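
The "1 Physical GPUs, 1 Logical GPUs" print and the 1024 MB logical device in the log match TensorFlow's standard virtual-device recipe. A sketch of that pattern (TF 2.x experimental API; the likely shape of the agent's GPU setup, not confirmed code):

    import tensorflow as tf

    gpus = tf.config.experimental.list_physical_devices("GPU")
    if gpus:
        # Cap the process at a 1024 MB logical device, as the device log above shows.
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
        logical_gpus = tf.config.experimental.list_logical_devices("GPU")
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
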
ERROR:nodes_wrapper:Error in node RLlibAgent:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 291, in loop
for parsed in inputs(fi, waiting_for=waiting_for):
File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/reading.py", line 20, in inputs
readyr, readyw, readyx = select.select([f], [], [f], intermediate_timeout)
File "/usr/local/lib/python3.8/dist-packages/ray/worker.py", line 881, in sigterm_handler
sys.exit(signal.SIGTERM)
SystemExit: Signals.SIGTERM
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 424, in loop
raise InternalProblem(msg) from e # XXX
zuper_nodes.structures.InternalProblem: Unexpected error:
| Traceback (most recent call last):
| File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 291, in loop
| for parsed in inputs(fi, waiting_for=waiting_for):
| File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/reading.py", line 20, in inputs
| readyr, readyw, readyx = select.select([f], [], [f], intermediate_timeout)
| File "/usr/local/lib/python3.8/dist-packages/ray/worker.py", line 881, in sigterm_handler
| sys.exit(signal.SIGTERM)
| SystemExit: Signals.SIGTERM
|
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 291, in loop
for parsed in inputs(fi, waiting_for=waiting_for):
File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/reading.py", line 20, in inputs
readyr, readyw, readyx = select.select([f], [], [f], intermediate_timeout)
File "/usr/local/lib/python3.8/dist-packages/ray/worker.py", line 881, in sigterm_handler
sys.exit(signal.SIGTERM)
SystemExit: Signals.SIGTERM
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 424, in loop
raise InternalProblem(msg) from e # XXX
zuper_nodes.structures.InternalProblem: Unexpected error:
| Traceback (most recent call last):
| File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 291, in loop
| for parsed in inputs(fi, waiting_for=waiting_for):
| File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/reading.py", line 20, in inputs
| readyr, readyw, readyx = select.select([f], [], [f], intermediate_timeout)
| File "/usr/local/lib/python3.8/dist-packages/ray/worker.py", line 881, in sigterm_handler
| sys.exit(signal.SIGTERM)
| SystemExit: Signals.SIGTERM
|
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "solution.py", line 127, in <module>
main()
File "solution.py", line 123, in main
wrap_direct(node=node, protocol=protocol)
File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
run_loop(node, protocol, args)
File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 251, in run_loop
raise Exception(msg) from e
Exception: Error in node RLlibAgent
|
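
The root-cause chain in the tracebacks above: the evaluator sends SIGTERM, the handler ray installed (ray/worker.py sigterm_handler) calls sys.exit(signal.SIGTERM), and the resulting SystemExit surfaces inside the wrapper's blocking select.select(). A minimal sketch of tolerating that shutdown path (run_loop is a hypothetical stand-in for the node's read loop):

    import signal
    import sys
    import time

    def run_loop():
        # Hypothetical stand-in for the node's blocking fifo read loop.
        while True:
            time.sleep(1)

    # Reproduce ray's behavior: SIGTERM -> sys.exit(SIGTERM) -> SystemExit.
    signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(signum))
    try:
        run_loop()
    except SystemExit as e:
        sys.stderr.write(f"shutting down on signal: {e.code}\n")  # clean up fifos here
        raise
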