
Submission 18222

Submission: 18222
Competing: yes
Challenge: aido-LFI-real-validation
User: András Kalapos 🇭🇺
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: eval0: 119268, eval0-videos: 119278, eval0-visualize: 119275, eval1: 119290, eval1-videos: 119293, eval1-visualize: 119296, eval2: 119361, eval2-videos: 119397, eval2-visualize: 119395
Next:
User label: 3090
Admin priority: 50
Blessing: n/a
User priority: 50

Artifact previews (on the original page, click the images to see detailed statistics about each episode):
119397 (eval2-videos): robot, watchtower
119395 (eval2-visualize): sync, visualization
119361 (eval2): scenario/drawing.html
119296 (eval1-visualize): sync, visualization
119293 (eval1-videos): robot, watchtower
119290 (eval1): scenario/drawing.html
119278 (eval0-videos): robot, watchtower
119275 (eval0-visualize): sync, visualization
119268 (eval0): scenario/drawing.html

Evaluation jobs for this submission

See previous jobs for previous versions of challenges
Job ID | step | status | up to date | duration | message
(Date started and date completed are not shown below. For every job, artefacts are hidden unless you are the author and logged in, and no reset is possible.)
119397 | eval2-videos | success | yes | 0:02:41
all ok, 8 bags processed
119396 | eval2-videos | success | yes | 0:03:12
all ok, 8 bags processed
119395 | eval2-visualize | success | yes | 0:01:11
driven_lanedir_consec: 0.25709679196707946
survival_time: 12.399722576141356
deviation-center-line: 0.49680018267697745
in-drivable-lane: 6.899832248687744

other stats
deviation-heading: 6.238838102656264
distance-from-start: 1.530795705794213
driven_any: 4.603941419529944
driven_lanedir: 0.6484885180805663
visualized-eval2-passed: 1
119394 | eval2-visualize | success | yes | 0:01:11
119393 | eval2-visualize | success | yes | 0:01:09
119392 | eval2-visualize | success | yes | 0:01:01
119391 | eval2-visualize | success | yes | 0:01:06
119390 | eval2-visualize | success | yes | 0:02:45
119361 | eval2 | success | yes | 0:02:33
119296 | eval1-visualize | success | yes | 0:01:19
driven_lanedir_consec: 0.4242953731742619
survival_time: 9.999890089035034
deviation-center-line: 0.18835917640081137
in-drivable-lane: 8.39999794960022

other stats
deviation-heading: 0.4286216031707824
distance-from-start: 1.1097421219447314
driven_any: 4.519850471382746
driven_lanedir: 0.4242953731742619
visualized-eval1-passed: 1
119295 | eval1-visualize | success | yes | 0:01:34
119294 | eval1-visualize | success | yes | 0:01:43
119293 | eval1-videos | success | yes | 0:03:12
all ok, 8 bags processed
119290 | eval1 | success | yes | 0:02:18
119280 | eval1 | aborted | yes | 0:03:27
DEBUG:commons:version: 6.1.7 *
INFO:typing:version: 6.1.8
DEBUG:aido_schemas:aido-protocols version 6.0.33 path /usr/local/lib/python3.8/dist-packages
INFO:nodes:version 6.1.1 path /usr/local/lib/python3.8/dist-packages pyparsing 2.4.6
2021-12-03 15:30:18.360842: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-12-03 15:30:19.896758: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2021-12-03 15:30:19.920712: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:19.920824: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: NVIDIA GeForce GTX 1080 computeCapability: 6.1
coreClock: 1.835GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s
2021-12-03 15:30:19.920852: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-12-03 15:30:19.922717: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-12-03 15:30:19.923746: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-12-03 15:30:19.923960: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-12-03 15:30:19.925124: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-12-03 15:30:19.925966: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-12-03 15:30:19.929009: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-12-03 15:30:19.929102: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:19.929240: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:19.929317: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-12-03 15:30:19.929606: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-12-03 15:30:19.950171: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3799900000 Hz
2021-12-03 15:30:19.950761: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fa5d10 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-12-03 15:30:19.950780: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2021-12-03 15:30:19.991783: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:19.992024: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7dd80b0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-12-03 15:30:19.992051: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): NVIDIA GeForce GTX 1080, Compute Capability 6.1
2021-12-03 15:30:19.992318: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:19.992421: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: NVIDIA GeForce GTX 1080 computeCapability: 6.1
coreClock: 1.835GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s
2021-12-03 15:30:19.992463: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-12-03 15:30:19.992513: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-12-03 15:30:19.992609: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-12-03 15:30:19.992692: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-12-03 15:30:19.992759: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-12-03 15:30:19.992824: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-12-03 15:30:19.992887: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-12-03 15:30:19.993017: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:19.993212: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:19.993354: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-12-03 15:30:19.993417: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-12-03 15:30:20.240839: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-12-03 15:30:20.240864: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0 
2021-12-03 15:30:20.240870: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N 
2021-12-03 15:30:20.240992: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:20.241139: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:20.241257: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1024 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
DEBUG:ipce:version 6.0.36 path /usr/local/lib/python3.8/dist-packages
INFO:nodes_wrapper:checking implementation
INFO:nodes_wrapper:checking implementation OK
DEBUG:nodes_wrapper:run_loop
  fin: /fifos/ego0-in
 fout: fifo:/fifos/ego0-out
INFO:nodes_wrapper:Fifo /fifos/ego0-out created. I will block until a reader appears.
INFO:nodes_wrapper:Fifo reader appeared for /fifos/ego0-out.
INFO:nodes_wrapper:Node RLlibAgent starting reading
 fi_desc: /fifos/ego0-in
 fo_desc: fifo:/fifos/ego0-out
INFO:nodes_wrapper:65294ff3a15c:RLlibAgent: init()
WARNING:config.config:Found paths with seed 3090:
WARNING:config.config:0: ./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle45_AIDOWrapper_3090/Dec08_00-58-57/config_dump_3090.yml
WARNING:config.config:Found checkpoints in ./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle45_AIDOWrapper_3090/Dec08_00-58-57:
WARNING:config.config:0: ./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle45_AIDOWrapper_3090/Dec08_00-58-57/PPO_0_2020-12-08_00-58-5910v5awty/checkpoint_224/checkpoint-224
WARNING:config.config:1: ./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle45_AIDOWrapper_3090/Dec08_00-58-57/PPO_0_2020-12-08_00-58-5910v5awty/checkpoint_230/checkpoint-230
WARNING:config.config:Config loaded from ./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle45_AIDOWrapper_3090/Dec08_00-58-57/config_dump_3090.yml
WARNING:config.config:Model checkpoint loaded from ./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle45_AIDOWrapper_3090/Dec08_00-58-57/PPO_0_2020-12-08_00-58-5910v5awty/checkpoint_224/checkpoint-224
WARNING:config.config:Updating default config values by: 
 env_config:
  mode: inference

WARNING:config.config:Env_config.mode is 'inference', some hyperparameters will be overwritten by: 
 rllib_config:
  num_workers: 0
  num_gpus: 0
  callbacks: {}
ray_init_config:
  num_cpus: 1
  memory: 2097152000
  object_store_memory: 209715200
  redis_max_memory: 209715200
  local_mode: true

INFO:nodes_wrapper:65294ff3a15c:RLlibAgent: === Wrappers ===================================
INFO:nodes_wrapper:65294ff3a15c:RLlibAgent: Observation wrappers
 <ClipImageWrapper<DummyDuckietownGymLikeEnv instance>>
<ResizeWrapper<ClipImageWrapper<DummyDuckietownGymLikeEnv instance>>>
<ObservationBufferWrapper<ResizeWrapper<ClipImageWrapper<DummyDuckietownGymLikeEnv instance>>>>
<NormalizeWrapper<ObservationBufferWrapper<ResizeWrapper<ClipImageWrapper<DummyDuckietownGymLikeEnv instance>>>>>
INFO:nodes_wrapper:65294ff3a15c:RLlibAgent: Action wrappers
 <Heading2WheelVelsWrapper<NormalizeWrapper<ObservationBufferWrapper<ResizeWrapper<ClipImageWrapper<DummyDuckietownGymLikeEnv instance>>>>>>
INFO:nodes_wrapper:65294ff3a15c:RLlibAgent: Reward wrappers
 
INFO:nodes_wrapper:65294ff3a15c:RLlibAgent: === Config ===================================
INFO:nodes_wrapper:65294ff3a15c:RLlibAgent: seed: 3090
experiment_name: PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle45_AIDOWrapper
algo: PPO
algo_config_files:
  PPO: config/algo/ppo.yml
  general: config/algo/general.yml
env_config:
  mode: inference
  episode_max_steps: 500
  resized_input_shape: (84, 84)
  crop_image_top: true
  top_crop_divider: 3
  grayscale_image: false
  frame_stacking: true
  frame_stacking_depth: 3
  motion_blur: false
  action_type: heading
  reward_function: posangle
  distortion: true
  accepted_start_angle_deg: 45
  simulation_framerate: 30
  frame_skip: 3
  action_delay_ratio: 0.0
  training_map: multimap_aido5
  domain_rand: false
  dynamics_rand: false
  camera_rand: false
  frame_repeating: 0.0
  spawn_obstacles: false
  obstacles:
    duckie:
      density: 0.5
      static: true
    duckiebot:
      density: 0
      static: false
  spawn_forward_obstacle: false
  aido_wrapper: true
  wandb:
    project: duckietown-rllib
  experiment_name: PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle45_AIDOWrapper
  seed: 3090
ray_init_config:
  num_cpus: 1
  webui_host: 127.0.0.1
  memory: 2097152000
  object_store_memory: 209715200
  redis_max_memory: 209715200
  local_mode: true
restore_seed: -1
restore_experiment_idx: 0
restore_checkpoint_idx: 0
debug_hparams:
  rllib_config:
    num_workers: 1
    num_gpus: 0
  ray_init_config:
    num_cpus: 1
    memory: 2097152000
    object_store_memory: 209715200
    redis_max_memory: 209715200
    local_mode: true
inference_hparams:
  rllib_config:
    num_workers: 0
    num_gpus: 0
    callbacks: {}
  ray_init_config:
    num_cpus: 1
    memory: 2097152000
    object_store_memory: 209715200
    redis_max_memory: 209715200
    local_mode: true
timesteps_total: 3000000.0
rllib_config:
  num_workers: 0
  sample_batch_size: 265
  num_gpus: 0
  train_batch_size: 4096
  gamma: 0.99
  lr: 5.0e-05
  monitor: false
  evaluation_interval: 25
  evaluation_num_episodes: 2
  evaluation_config:
    monitor: false
    explore: false
  seed: 1234
  lambda: 0.95
  sgd_minibatch_size: 128
  vf_loss_coeff: 0.5
  entropy_coeff: 0.0
  clip_param: 0.2
  vf_clip_param: 0.2
  grad_clip: 0.5
  env: Duckietown
  callbacks: {}
  env_config:
    mode: inference
    episode_max_steps: 500
    resized_input_shape: (84, 84)
    crop_image_top: true
    top_crop_divider: 3
    grayscale_image: false
    frame_stacking: true
    frame_stacking_depth: 3
    motion_blur: false
    action_type: heading
    reward_function: posangle
    distortion: true
    accepted_start_angle_deg: 45
    simulation_framerate: 30
    frame_skip: 3
    action_delay_ratio: 0.0
    training_map: multimap_aido5
    domain_rand: false
    dynamics_rand: false
    camera_rand: false
    frame_repeating: 0.0
    spawn_obstacles: false
    obstacles:
      duckie:
        density: 0.5
        static: true
      duckiebot:
        density: 0
        static: false
    spawn_forward_obstacle: false
    aido_wrapper: true
    wandb:
      project: duckietown-rllib
    experiment_name: PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle45_AIDOWrapper
    seed: 3090

2021-12-03 15:30:20,524	INFO trainer.py:428 -- Tip: set 'eager': true or the --eager flag to enable TensorFlow eager execution
2021-12-03 15:30:20,533	ERROR syncer.py:39 -- Log sync requires rsync to be installed.
2021-12-03 15:30:20,533	WARNING deprecation.py:29 -- DeprecationWarning: `sample_batch_size` has been deprecated. Use `rollout_fragment_length` instead. This will raise an error in the future!
2021-12-03 15:30:20,533	INFO trainer.py:583 -- Current log_level is WARN. For more information, set 'log_level': 'INFO' / 'DEBUG' or use the -v and -vv flags.
2021-12-03 15:30:20.539435: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:20.539557: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: NVIDIA GeForce GTX 1080 computeCapability: 6.1
coreClock: 1.835GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s
2021-12-03 15:30:20.539600: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-12-03 15:30:20.539636: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-12-03 15:30:20.539658: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-12-03 15:30:20.539681: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-12-03 15:30:20.539702: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-12-03 15:30:20.539725: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-12-03 15:30:20.539748: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-12-03 15:30:20.539811: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:20.539931: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:20.540013: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-12-03 15:30:20.540034: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-12-03 15:30:20.540041: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0 
2021-12-03 15:30:20.540047: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N 
2021-12-03 15:30:20.540117: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:20.540244: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:20.540332: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1024 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
2021-12-03 15:30:20.777539: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-12-03 15:30:21.201034: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-12-03 15:30:23.143065: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:23.143180: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: NVIDIA GeForce GTX 1080 computeCapability: 6.1
coreClock: 1.835GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s
2021-12-03 15:30:23.143213: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-12-03 15:30:23.143236: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-12-03 15:30:23.143251: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-12-03 15:30:23.143264: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-12-03 15:30:23.143276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-12-03 15:30:23.143289: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-12-03 15:30:23.143302: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-12-03 15:30:23.143351: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:23.143457: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:23.143530: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-12-03 15:30:23.143548: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-12-03 15:30:23.143554: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0 
2021-12-03 15:30:23.143557: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N 
2021-12-03 15:30:23.143611: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:23.143718: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-03 15:30:23.143798: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1024 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
2021-12-03 15:30:23,785	INFO trainable.py:217 -- Getting current IP.
2021-12-03 15:30:23,785	WARNING util.py:37 -- Install gputil for GPU system monitoring.
INFO:nodes_wrapper:65294ff3a15c:RLlibAgent: Restoring checkpoint from: ./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle45_AIDOWrapper_3090/Dec08_00-58-57/PPO_0_2020-12-08_00-58-5910v5awty/checkpoint_224/checkpoint-224
2021-12-03 15:30:23,827	INFO trainable.py:217 -- Getting current IP.
2021-12-03 15:30:23,827	INFO trainable.py:422 -- Restored on 172.17.0.2 from checkpoint: ./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle45_AIDOWrapper_3090/Dec08_00-58-57/PPO_0_2020-12-08_00-58-5910v5awty/checkpoint_224/checkpoint-224
2021-12-03 15:30:23,827	INFO trainable.py:430 -- Current state after restoring: {'_iteration': 224, '_timesteps_total': 949760, '_time_total': 74064.6750099659, '_episodes_total': 4862}
INFO:nodes_wrapper:65294ff3a15c:RLlibAgent: Starting episode "episode".
ERROR:nodes_wrapper:Error in node RLlibAgent: 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 355, in loop
    handle_message_node(parsed, receiver0, context0)
  File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 517, in handle_message_node
    expected = pc.get_expected_events()
  File "/usr/local/lib/python3.8/dist-packages/zuper_nodes/language_recognize.py", line 220, in get_expected_events
    events.add(em)
  File "<string>", line 3, in __hash__
  File "/usr/local/lib/python3.8/dist-packages/ray/worker.py", line 881, in sigterm_handler
    sys.exit(signal.SIGTERM)
SystemExit: Signals.SIGTERM

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
    loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
  File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 378, in loop
    raise InternalProblem(msg) from e  # XXX
zuper_nodes.structures.InternalProblem: Exception while handling a message on topic "observations".

| Traceback (most recent call last):
|   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 355, in loop
|     handle_message_node(parsed, receiver0, context0)
|   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 517, in handle_message_node
|     expected = pc.get_expected_events()
|   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes/language_recognize.py", line 220, in get_expected_events
|     events.add(em)
|   File "<string>", line 3, in __hash__
|   File "/usr/local/lib/python3.8/dist-packages/ray/worker.py", line 881, in sigterm_handler
|     sys.exit(signal.SIGTERM)
| SystemExit: Signals.SIGTERM
| 

1 Physical GPUs, 1 Logical GPUs
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 355, in loop
    handle_message_node(parsed, receiver0, context0)
  File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 517, in handle_message_node
    expected = pc.get_expected_events()
  File "/usr/local/lib/python3.8/dist-packages/zuper_nodes/language_recognize.py", line 220, in get_expected_events
    events.add(em)
  File "<string>", line 3, in __hash__
  File "/usr/local/lib/python3.8/dist-packages/ray/worker.py", line 881, in sigterm_handler
    sys.exit(signal.SIGTERM)
SystemExit: Signals.SIGTERM

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
    loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
  File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 378, in loop
    raise InternalProblem(msg) from e  # XXX
zuper_nodes.structures.InternalProblem: Exception while handling a message on topic "observations".

| Traceback (most recent call last):
|   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 355, in loop
|     handle_message_node(parsed, receiver0, context0)
|   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 517, in handle_message_node
|     expected = pc.get_expected_events()
|   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes/language_recognize.py", line 220, in get_expected_events
|     events.add(em)
|   File "<string>", line 3, in __hash__
|   File "/usr/local/lib/python3.8/dist-packages/ray/worker.py", line 881, in sigterm_handler
|     sys.exit(signal.SIGTERM)
| SystemExit: Signals.SIGTERM
| 

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "solution.py", line 126, in <module>
    main()
  File "solution.py", line 122, in main
    wrap_direct(node=node, protocol=protocol)
  File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
    run_loop(node, protocol, args)
  File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 251, in run_loop
    raise Exception(msg) from e
Exception: Error in node RLlibAgent
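The aborted job above shows the agent node restoring an RLlib PPO checkpoint in local-mode Ray before the evaluator sent SIGTERM while an "observations" message was being handled. The following is a minimal, hypothetical sketch of that restore-and-infer setup, assuming the pre-1.0 ray.rllib.agents.ppo.PPOTrainer API that matches the log output. DuckietownStubEnv is a stand-in (the real agent wraps a Duckietown environment with the ClipImage/Resize/ObservationBuffer/Normalize wrappers listed above), and the (84, 84, 9) observation shape and 1-D heading action are inferences from the config, not taken from the submission's code.

```python
# Hypothetical sketch (not the submission's actual code) of the restore-and-infer
# setup described in the log above: local-mode Ray, an RLlib PPOTrainer with
# num_workers=0 / num_gpus=0, and a restore from the listed checkpoint.
import gym
import numpy as np
import ray
from ray.tune.registry import register_env
from ray.rllib.agents.ppo import PPOTrainer  # pre-1.0 RLlib API, as in the log

CHECKPOINT = ("./models/PPO-RLlib-AIDO5_FrameSkip3_NewMaps_StartAngle45_AIDOWrapper_3090/"
              "Dec08_00-58-57/PPO_0_2020-12-08_00-58-5910v5awty/"
              "checkpoint_224/checkpoint-224")


class DuckietownStubEnv(gym.Env):
    """Stand-in for the wrapped Duckietown env. The (84, 84, 9) observation shape
    (84x84 RGB frames, stacked 3 deep) and the 1-D heading action are assumptions."""
    observation_space = gym.spaces.Box(0.0, 1.0, shape=(84, 84, 9), dtype=np.float32)
    action_space = gym.spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self):
        return self.observation_space.sample()

    def step(self, action):
        return self.observation_space.sample(), 0.0, False, {}


# The log's ray_init_config also caps memory / object_store_memory; omitted here.
ray.init(local_mode=True, num_cpus=1)

# rllib_config["env"] is "Duckietown", so an env must be registered under that name.
register_env("Duckietown", lambda cfg: DuckietownStubEnv())

trainer = PPOTrainer(env="Duckietown", config={"num_workers": 0, "num_gpus": 0})
trainer.restore(CHECKPOINT)           # "Restoring checkpoint from: ..." in the log

obs = DuckietownStubEnv().reset()     # in the real node, the camera observation
action = trainer.compute_action(obs)  # heading command, mapped to wheel velocities
```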
119278 | eval0-videos | success | yes | 0:01:28
all ok, 8 bags processed
119277 | eval0-videos | success | yes | 0:02:03
all ok, 8 bags processed
119275 | eval0-visualize | success | yes | 0:00:55
driven_lanedir_consec: 0.3787888423176961
survival_time: 5.1003289222717285
deviation-center-line: 0.10718374090678164
in-drivable-lane: 3.599940061569214

other stats
deviation-heading: 0.7159383852854795
distance-from-start: 1.0621580116231366
driven_any: 2.338911874675153
driven_lanedir: 0.4356216738609817
visualized-eval0-passed: 1
119274 | eval0-visualize | success | yes | 0:01:01
119273 | eval0-visualize | success | yes | 0:01:02
119272 | eval0-visualize | success | yes | 0:00:57
119270 | eval0-visualize | success | yes | 0:02:11
119268 | eval0 | success | yes | 0:02:09
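For a quick side-by-side comparison, the episode statistics reported by the three visualize jobs above (119275, 119296 and 119395) can be tabulated and summarized with a few lines of Python. The values are copied from this page; the per-metric mean is only an illustration and is not the official challenge scoring.

```python
# Illustrative only: collect the episode statistics shown on this page and
# print simple per-metric summaries. This is NOT the official challenge scoring.
episodes = {
    "eval0 (job 119275)": {
        "driven_lanedir_consec": 0.3787888423176961,
        "survival_time": 5.1003289222717285,
        "deviation-center-line": 0.10718374090678164,
        "in-drivable-lane": 3.599940061569214,
    },
    "eval1 (job 119296)": {
        "driven_lanedir_consec": 0.4242953731742619,
        "survival_time": 9.999890089035034,
        "deviation-center-line": 0.18835917640081137,
        "in-drivable-lane": 8.39999794960022,
    },
    "eval2 (job 119395)": {
        "driven_lanedir_consec": 0.25709679196707946,
        "survival_time": 12.399722576141356,
        "deviation-center-line": 0.49680018267697745,
        "in-drivable-lane": 6.899832248687744,
    },
}

for metric in ["driven_lanedir_consec", "survival_time",
               "deviation-center-line", "in-drivable-lane"]:
    values = [stats[metric] for stats in episodes.values()]
    print(f"{metric:25s} mean={sum(values) / len(values):7.3f} "
          f"min={min(values):7.3f} max={max(values):7.3f}")
```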