
Evaluator 5179

ID: 5179
evaluator: gpu-production-spot-0-02
owner: I don't have one 😀
machine: gpu-prod_ad9eeed9ae41
process: gpu-production-spot-0-02_ad9eeed9ae41
version: 6.2.7
first heard:
last heard:
status: inactive
# evaluating:
# success: 156 (70624)
# timeout: 1 (71423)
# failed: 26 (75094)
# error:
# aborted: 4 (70988)
# host-error: 9 (73824)
arm: 0
x86_64: 1
Mac: 0
gpu available: 1
Number of processors: 64
Processor frequency: 0.0 GHz
Free % of processors: 99%
RAM total: 249.0 GB
RAM free: 186.5 GB
Disk: 969.3 GB
Disk available: 588.4 GB
Docker Hub:
P1: 1
P2:
Cloud simulations: 1
PI Camera: 0
# Duckiebots: 0
Map 3x3 available:
Number of duckies:
gpu cores:
AIDO 2 Map LF public:
AIDO 2 Map LF private:
AIDO 2 Map LFV public:
AIDO 2 Map LFV private:
AIDO 2 Map LFVI public:
AIDO 2 Map LFVI private:
AIDO 3 Map LF public:
AIDO 3 Map LF private:
AIDO 3 Map LFV public:
AIDO 3 Map LFV private:
AIDO 3 Map LFVI public:
AIDO 3 Map LFVI private:
AIDO 5 Map large loop:
ETU track (for 2021, the map is ETH_small_inter):
IPFS mountpoint /ipfs available:
IPNS mountpoint /ipns available:

Evaluator jobs

Columns: Job ID | submission | user | user label | challenge | step | status | up to date | evaluator | date started | date completed | duration | message
Job 75468 · submission 14887 · user Liam Han · label exercises_braitenberg · challenge mooc-BV1 · step sim-1of5 · status success · up to date: no · evaluator gpu-production-spot-0-02 · duration 0:02:27
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
distance-from-start_mean: 2.0868386304291437


other stats (single episode, so max, mean, median, and min coincide; each metric listed once):
agent_compute-ego0: 0.0118650230201515
complete-iteration: 0.2712481911117966
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 2.0868386304291437
driven_any: 2.447249212321108
driven_lanedir_consec: 0.0
driven_lanedir: 0.0
get_duckie_state: 0.12339060757611248
get_robot_state: 0.003938869527868322
get_state_dump: 0.024460293795611408
get_ui_image: 0.015910228523048194
in-drivable-lane: 9.199999999999996
set_robot_commands: 0.002366069845251135
sim_compute_performance-ego0: 0.001994738707671294
sim_compute_sim_state: 0.009476673280870592
sim_render-ego0: 0.003824676049722208
simulation-passed: 1
step_physics: 0.07390109268394676
survival_time: 9.199999999999996
per-episodes details: {"d60-ego0": {"driven_any": 2.447249212321108, "get_ui_image": 0.015910228523048194, "step_physics": 0.07390109268394676, "survival_time": 9.199999999999996, "driven_lanedir": 0.0, "get_state_dump": 0.024460293795611408, "get_robot_state": 0.003938869527868322, "sim_render-ego0": 0.003824676049722208, "get_duckie_state": 0.12339060757611248, "in-drivable-lane": 9.199999999999996, "deviation-heading": 0.0, "agent_compute-ego0": 0.0118650230201515, "complete-iteration": 0.2712481911117966, "set_robot_commands": 0.002366069845251135, "distance-from-start": 2.0868386304291437, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.009476673280870592, "sim_compute_performance-ego0": 0.001994738707671294}}
No reset possible
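Each job in this list ran a single simulated episode, which is why every max/mean/median/min aggregate reported for a metric carries the same value. As an illustration only (this is not the evaluator's own code), the aggregates could be recomputed from a per-episodes `details` blob like the ones shown; the blob below is a shortened, hypothetical example of the same shape:

```python
import json
from statistics import mean, median


def aggregate(details_json: str) -> dict:
    """Compute per-metric max/mean/median/min across all episodes
    in a per-episodes `details` blob: {"episode-name": {metric: value}}."""
    episodes = json.loads(details_json)
    metrics: dict[str, list[float]] = {}
    for ep_stats in episodes.values():
        for name, value in ep_stats.items():
            metrics.setdefault(name, []).append(value)
    return {
        name: {"max": max(vs), "mean": mean(vs),
               "median": median(vs), "min": min(vs)}
        for name, vs in metrics.items()
    }


# With a single episode, all four aggregates coincide.
blob = '{"d60-ego0": {"survival_time": 9.2, "driven_any": 2.447}}'
stats = aggregate(blob)
```

With more than one episode in the blob, the four aggregates would start to differ, matching how multi-episode jobs are summarized.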
Job 75461 · submission 14887 · user Liam Han · label exercises_braitenberg · challenge mooc-BV1 · step sim-1of5 · status success · up to date: no · evaluator gpu-production-spot-0-02 · duration 0:02:34
distance-from-start_mean: 2.1652044107470423


other stats (single episode, so max, mean, median, and min coincide; each metric listed once):
agent_compute-ego0: 0.011751012703807084
complete-iteration: 0.27360651173542455
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 2.1652044107470423
driven_any: 2.5352144800331065
driven_lanedir_consec: 0.0
driven_lanedir: 0.0
get_duckie_state: 0.12309857619177436
get_robot_state: 0.004024457685726205
get_state_dump: 0.024605758411368143
get_ui_image: 0.01633640662911012
in-drivable-lane: 9.650000000000002
set_robot_commands: 0.0022780477386159996
sim_compute_performance-ego0: 0.0019842337087257623
sim_compute_sim_state: 0.009475449925845432
sim_render-ego0: 0.003872978318597853
simulation-passed: 1
step_physics: 0.07606659476290044
survival_time: 9.650000000000002
per-episodes details: {"d60-ego0": {"driven_any": 2.5352144800331065, "get_ui_image": 0.01633640662911012, "step_physics": 0.07606659476290044, "survival_time": 9.650000000000002, "driven_lanedir": 0.0, "get_state_dump": 0.024605758411368143, "get_robot_state": 0.004024457685726205, "sim_render-ego0": 0.003872978318597853, "get_duckie_state": 0.12309857619177436, "in-drivable-lane": 9.650000000000002, "deviation-heading": 0.0, "agent_compute-ego0": 0.011751012703807084, "complete-iteration": 0.27360651173542455, "set_robot_commands": 0.0022780477386159996, "distance-from-start": 2.1652044107470423, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.009475449925845432, "sim_compute_performance-ego0": 0.0019842337087257623}}
No reset possible
Job 75457 · submission 14886 · user Liam Han · label exercises_braitenberg · challenge mooc-BV1 · step sim-2of5 · status success · up to date: no · evaluator gpu-production-spot-0-02 · duration 0:10:05
distance-from-start_mean: 1.2533530458904572


other stats (single episode, so max, mean, median, and min coincide; each metric listed once):
agent_compute-ego0: 0.011822003111255657
complete-iteration: 0.2243851813348902
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 1.2533530458904572
driven_any: 5.168537710554688
driven_lanedir_consec: 0.0
driven_lanedir: 0.0
get_duckie_state: 0.08306335946304613
get_robot_state: 0.00392670456713979
get_state_dump: 0.017751670697646573
get_ui_image: 0.015036906926062184
in-drivable-lane: 59.99999999999873
set_robot_commands: 0.0023622306359995415
sim_compute_performance-ego0: 0.001973200201690445
sim_compute_sim_state: 0.01045961245013514
sim_render-ego0: 0.003889872966261331
simulation-passed: 1
step_physics: 0.07399686865762906
survival_time: 59.99999999999873
per-episodes details: {"d40-ego0": {"driven_any": 5.168537710554688, "get_ui_image": 0.015036906926062184, "step_physics": 0.07399686865762906, "survival_time": 59.99999999999873, "driven_lanedir": 0.0, "get_state_dump": 0.017751670697646573, "get_robot_state": 0.00392670456713979, "sim_render-ego0": 0.003889872966261331, "get_duckie_state": 0.08306335946304613, "in-drivable-lane": 59.99999999999873, "deviation-heading": 0.0, "agent_compute-ego0": 0.011822003111255657, "complete-iteration": 0.2243851813348902, "set_robot_commands": 0.0023622306359995415, "distance-from-start": 1.2533530458904572, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.01045961245013514, "sim_compute_performance-ego0": 0.001973200201690445}}
No reset possible
Job 75453 · submission 14885 · user Allen Francis · label exercises_braitenberg · challenge mooc-BV1 · step sim-1of5 · status success · up to date: no · evaluator gpu-production-spot-0-02 · duration 0:04:13
distance-from-start_mean: 1.7256809072810828


other stats (single episode, so max, mean, median, and min coincide; each metric listed once):
agent_compute-ego0: 0.012284498397118407
complete-iteration: 0.27989136651565466
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 1.7256809072810828
driven_any: 1.7521897262145278
driven_lanedir_consec: 0.0
driven_lanedir: 0.0
get_duckie_state: 0.1250472596434296
get_robot_state: 0.003980593603165423
get_state_dump: 0.024822192113907612
get_ui_image: 0.01692385425984534
in-drivable-lane: 18.250000000000124
set_robot_commands: 0.002333107541819088
sim_compute_performance-ego0: 0.0019831546668797892
sim_compute_sim_state: 0.010699698833819948
sim_render-ego0: 0.004006707603162755
simulation-passed: 1
step_physics: 0.0777104500212956
survival_time: 18.250000000000124
per-episodes details: {"d60-ego0": {"driven_any": 1.7521897262145278, "get_ui_image": 0.01692385425984534, "step_physics": 0.0777104500212956, "survival_time": 18.250000000000124, "driven_lanedir": 0.0, "get_state_dump": 0.024822192113907612, "get_robot_state": 0.003980593603165423, "sim_render-ego0": 0.004006707603162755, "get_duckie_state": 0.1250472596434296, "in-drivable-lane": 18.250000000000124, "deviation-heading": 0.0, "agent_compute-ego0": 0.012284498397118407, "complete-iteration": 0.27989136651565466, "set_robot_commands": 0.002333107541819088, "distance-from-start": 1.7256809072810828, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.010699698833819948, "sim_compute_performance-ego0": 0.0019831546668797892}}
No reset possible
Job 75446 · submission 14884 · user Juan Ramirez · label exercises_braitenberg · challenge mooc-BV1 · step sim-0of5 · status success · up to date: no · evaluator gpu-production-spot-0-02 · duration 0:04:07
distance-from-start_mean: 5.562479779754615


other stats (single episode, so max, mean, median, and min coincide; each metric listed once):
agent_compute-ego0: 0.011805327771936804
complete-iteration: 0.22614071149270512
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 5.562479779754615
driven_any: 8.958683162451429
driven_lanedir_consec: 0.0
driven_lanedir: 0.0
get_duckie_state: 0.09169259349119316
get_robot_state: 0.003753744282768768
get_state_dump: 0.019176954204596364
get_ui_image: 0.014994873005209618
in-drivable-lane: 20.550000000000157
set_robot_commands: 0.0022454099747741107
sim_compute_performance-ego0: 0.0019721377243116065
sim_compute_sim_state: 0.009066651747064685
sim_render-ego0: 0.0038778833972597584
simulation-passed: 1
step_physics: 0.06745912322720278
survival_time: 20.550000000000157
per-episodes details: {"d45-ego0": {"driven_any": 8.958683162451429, "get_ui_image": 0.014994873005209618, "step_physics": 0.06745912322720278, "survival_time": 20.550000000000157, "driven_lanedir": 0.0, "get_state_dump": 0.019176954204596364, "get_robot_state": 0.003753744282768768, "sim_render-ego0": 0.0038778833972597584, "get_duckie_state": 0.09169259349119316, "in-drivable-lane": 20.550000000000157, "deviation-heading": 0.0, "agent_compute-ego0": 0.011805327771936804, "complete-iteration": 0.22614071149270512, "set_robot_commands": 0.0022454099747741107, "distance-from-start": 5.562479779754615, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.009066651747064685, "sim_compute_performance-ego0": 0.0019721377243116065}}
No reset possible
Job 75441 · submission 14884 · user Juan Ramirez · label exercises_braitenberg · challenge mooc-BV1 · step sim-0of5 · status success · up to date: no · evaluator gpu-production-spot-0-02 · duration 0:02:09
distance-from-start_mean: 3.0128539201388604


other stats (single episode, so max, mean, median, and min coincide; each metric listed once):
agent_compute-ego0: 0.012145300209522248
complete-iteration: 0.23914123326539993
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 3.0128539201388604
driven_any: 3.328737997079803
driven_lanedir_consec: 0.0
driven_lanedir: 0.0
get_duckie_state: 0.09269677251577375
get_robot_state: 0.003887663781642914
get_state_dump: 0.0197058767080307
get_ui_image: 0.01551341563463211
in-drivable-lane: 7.94999999999998
set_robot_commands: 0.00231781005859375
sim_compute_performance-ego0: 0.001963651180267334
sim_compute_sim_state: 0.0099732905626297
sim_render-ego0: 0.003938879072666168
simulation-passed: 1
step_physics: 0.07689507752656936
survival_time: 7.94999999999998
per-episodes details: {"d45-ego0": {"driven_any": 3.328737997079803, "get_ui_image": 0.01551341563463211, "step_physics": 0.07689507752656936, "survival_time": 7.94999999999998, "driven_lanedir": 0.0, "get_state_dump": 0.0197058767080307, "get_robot_state": 0.003887663781642914, "sim_render-ego0": 0.003938879072666168, "get_duckie_state": 0.09269677251577375, "in-drivable-lane": 7.94999999999998, "deviation-heading": 0.0, "agent_compute-ego0": 0.012145300209522248, "complete-iteration": 0.23914123326539993, "set_robot_commands": 0.00231781005859375, "distance-from-start": 3.0128539201388604, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.0099732905626297, "sim_compute_performance-ego0": 0.001963651180267334}}
No reset possible
Job 75430 · submission 14880 · user fake name · label exercises_braitenberg · challenge mooc-BV1 · step sim-4of5 · status success · up to date: no · evaluator gpu-production-spot-0-02 · duration 0:10:32
distance-from-start_mean: 3.63978763696258


other stats (single episode, so max, mean, median, and min coincide; each metric listed once):
agent_compute-ego0: 0.011151803522483196
complete-iteration: 0.22828435957382165
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 3.63978763696258
driven_any: 3.7122831966020815
driven_lanedir_consec: 0.0
driven_lanedir: 0.0
get_duckie_state: 0.09720581755054485
get_robot_state: 0.003572592231058062
get_state_dump: 0.02000045061707
get_ui_image: 0.014790837314107038
in-drivable-lane: 59.99999999999873
set_robot_commands: 0.0021400904278274777
sim_compute_performance-ego0: 0.0018114384564630792
sim_compute_sim_state: 0.008759889475610433
sim_render-ego0: 0.003692878076774095
simulation-passed: 1
step_physics: 0.06506309382226644
survival_time: 59.99999999999873
per-episodes details: {"d50-ego0": {"driven_any": 3.7122831966020815, "get_ui_image": 0.014790837314107038, "step_physics": 0.06506309382226644, "survival_time": 59.99999999999873, "driven_lanedir": 0.0, "get_state_dump": 0.02000045061707, "get_robot_state": 0.003572592231058062, "sim_render-ego0": 0.003692878076774095, "get_duckie_state": 0.09720581755054485, "in-drivable-lane": 59.99999999999873, "deviation-heading": 0.0, "agent_compute-ego0": 0.011151803522483196, "complete-iteration": 0.22828435957382165, "set_robot_commands": 0.0021400904278274777, "distance-from-start": 3.63978763696258, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.008759889475610433, "sim_compute_performance-ego0": 0.0018114384564630792}}
No reset possible
Job 75421 · submission 14878 · user Franz Pucher · label v9 · challenge mooc-BV1 · step sim-1of5 · status success · up to date: no · evaluator gpu-production-spot-0-02 · duration 0:01:59
distance-from-start_mean: 1.5789745427710256


other stats (single episode, so max, mean, median, and min coincide; each metric listed once):
agent_compute-ego0: 0.012072786208122008
complete-iteration: 0.2744896181168095
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 1.5789745427710256
driven_any: 1.864247020081208
driven_lanedir_consec: 0.0
driven_lanedir: 0.0
get_duckie_state: 0.12420426645586569
get_robot_state: 0.003995382016704929
get_state_dump: 0.02500520598503851
get_ui_image: 0.01671318854055097
in-drivable-lane: 6.149999999999986
set_robot_commands: 0.0023487210273742676
sim_compute_performance-ego0: 0.0019679127200957266
sim_compute_sim_state: 0.010113769961941629
sim_render-ego0: 0.0039433567754683955
simulation-passed: 1
step_physics: 0.07402073183367329
survival_time: 6.149999999999986
per-episodes details: {"d60-ego0": {"driven_any": 1.864247020081208, "get_ui_image": 0.01671318854055097, "step_physics": 0.07402073183367329, "survival_time": 6.149999999999986, "driven_lanedir": 0.0, "get_state_dump": 0.02500520598503851, "get_robot_state": 0.003995382016704929, "sim_render-ego0": 0.0039433567754683955, "get_duckie_state": 0.12420426645586569, "in-drivable-lane": 6.149999999999986, "deviation-heading": 0.0, "agent_compute-ego0": 0.012072786208122008, "complete-iteration": 0.2744896181168095, "set_robot_commands": 0.0023487210273742676, "distance-from-start": 1.5789745427710256, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.010113769961941629, "sim_compute_performance-ego0": 0.0019679127200957266}}
No reset possible
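Each aggregate row above is a quantile taken over the per-episode values stored in the "per-episodes details" JSON; because each of these runs contains a single episode, max, mean, median and min all come out identical. A minimal sketch of how such aggregates could be recomputed from the details JSON (the `aggregate` helper is hypothetical, not part of the evaluator code):

```python
import json
from statistics import mean, median


def aggregate(details_json: str) -> dict:
    """Collapse per-episode metric dicts into max/mean/median/min rows."""
    episodes = json.loads(details_json)  # {"episode-ego": {metric: value}}
    per_metric: dict = {}
    for stats in episodes.values():
        for name, value in stats.items():
            per_metric.setdefault(name, []).append(value)
    return {
        name: {"max": max(vs), "mean": mean(vs),
               "median": median(vs), "min": min(vs)}
        for name, vs in per_metric.items()
    }


# With one episode, all four aggregates are the same number.
agg = aggregate('{"d60-ego0": {"survival_time": 6.15, "driven_any": 1.864}}')
```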
Job 75419 | submission 14878 | Franz Pucher | v9 | mooc-BV1 | sim-1of5 | success | up to date: no | gpu-production-spot-0-02 | 0:02:19
distance-from-start_mean: 1.5790452110547963

other stats (one episode; for each metric, max = mean = median = min):
agent_compute-ego0: 0.012055748893368627
complete-iteration: 0.2842818210201879
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 1.5790452110547963
driven_any: 1.864032807833972
driven_lanedir_consec: 0.0
driven_lanedir: 0.0
get_duckie_state: 0.12484857920677432
get_robot_state: 0.004070122395792315
get_state_dump: 0.024862106769315655
get_ui_image: 0.016934210254300024
in-drivable-lane: 6.149999999999986
per-episodes details: {"d60-ego0": {"driven_any": 1.864032807833972, "get_ui_image": 0.016934210254300024, "step_physics": 0.08259853432255407, "survival_time": 6.149999999999986, "driven_lanedir": 0.0, "get_state_dump": 0.024862106769315655, "get_robot_state": 0.004070122395792315, "sim_render-ego0": 0.004016845457015499, "get_duckie_state": 0.12484857920677432, "in-drivable-lane": 6.149999999999986, "deviation-heading": 0.0, "agent_compute-ego0": 0.012055748893368627, "complete-iteration": 0.2842818210201879, "set_robot_commands": 0.0023781157309009184, "distance-from-start": 1.5790452110547963, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.010362663576679846, "sim_compute_performance-ego0": 0.0020405111774321524}}
set_robot_commands: 0.0023781157309009184
sim_compute_performance-ego0: 0.0020405111774321524
sim_compute_sim_state: 0.010362663576679846
sim_render-ego0: 0.004016845457015499
simulation-passed: 1
step_physics: 0.08259853432255407
survival_time: 6.149999999999986
No reset possible
Job 75413 | submission 14877 | Nick Conway | exercises_braitenberg | mooc-BV1 | sim-0of5 | success | up to date: no | gpu-production-spot-0-02 | 0:08:03
distance-from-start_mean: 3.1424947267913943

other stats (one episode; for each metric, max = mean = median = min):
agent_compute-ego0: 0.011735047534211404
complete-iteration: 0.23737293688309383
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 3.1424947267913943
driven_any: 3.7751948708060534
driven_lanedir_consec: 0.0
driven_lanedir: 0.0
get_duckie_state: 0.09264982847914134
get_robot_state: 0.003911237518451505
get_state_dump: 0.01967545800065884
get_ui_image: 0.015795387241636632
in-drivable-lane: 43.24999999999968
per-episodes details: {"d45-ego0": {"driven_any": 3.7751948708060534, "get_ui_image": 0.015795387241636632, "step_physics": 0.07620522331695909, "survival_time": 43.24999999999968, "driven_lanedir": 0.0, "get_state_dump": 0.01967545800065884, "get_robot_state": 0.003911237518451505, "sim_render-ego0": 0.003861431161470832, "get_duckie_state": 0.09264982847914134, "in-drivable-lane": 43.24999999999968, "deviation-heading": 0.0, "agent_compute-ego0": 0.011735047534211404, "complete-iteration": 0.23737293688309383, "set_robot_commands": 0.0022967946446795385, "distance-from-start": 3.1424947267913943, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.009161924104492328, "sim_compute_performance-ego0": 0.001978137476614937}}
set_robot_commands: 0.0022967946446795385
sim_compute_performance-ego0: 0.001978137476614937
sim_compute_sim_state: 0.009161924104492328
sim_render-ego0: 0.003861431161470832
simulation-passed: 1
step_physics: 0.07620522331695909
survival_time: 43.24999999999968
No reset possible
Job 75406 | submission 14876 | Marc Maitre | exercises_braitenberg | mooc-BV1 | sim-3of5 | success | up to date: no | gpu-production-spot-0-02 | 0:07:55
distance-from-start_mean: 5.777775768464249

other stats (one episode; for each metric, max = mean = median = min):
agent_compute-ego0: 0.011800561460756486
complete-iteration: 0.195779241381153
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 5.777775768464249
driven_any: 5.874142587468
driven_lanedir_consec: 0.0
driven_lanedir: 0.0
get_duckie_state: 0.06171900274292115
get_robot_state: 0.003846198801071413
get_state_dump: 0.01467791199684143
get_ui_image: 0.01451252737352925
in-drivable-lane: 49.54999999999932
per-episodes details: {"d30-ego0": {"driven_any": 5.874142587468, "get_ui_image": 0.01451252737352925, "step_physics": 0.07254297742920537, "survival_time": 49.54999999999932, "driven_lanedir": 0.0, "get_state_dump": 0.01467791199684143, "get_robot_state": 0.003846198801071413, "sim_render-ego0": 0.003858235814879018, "get_duckie_state": 0.06171900274292115, "in-drivable-lane": 49.54999999999932, "deviation-heading": 0.0, "agent_compute-ego0": 0.011800561460756486, "complete-iteration": 0.195779241381153, "set_robot_commands": 0.002292926994062239, "distance-from-start": 5.777775768464249, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.008479902100178504, "sim_compute_performance-ego0": 0.0019518254745391108}}
set_robot_commands: 0.002292926994062239
sim_compute_performance-ego0: 0.0019518254745391108
sim_compute_sim_state: 0.008479902100178504
sim_render-ego0: 0.003858235814879018
simulation-passed: 1
step_physics: 0.07254297742920537
survival_time: 49.54999999999932
No reset possible
Job 75402 | submission 14875 | Dohyeong Kim | exercises_braitenberg | mooc-BV1 | sim-2of5 | success | up to date: no | gpu-production-spot-0-02 | 0:03:12
distance-from-start_mean: 2.654768815480638

other stats (one episode; for each metric, max = mean = median = min):
agent_compute-ego0: 0.011639888763427735
complete-iteration: 0.2250092420578003
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 2.654768815480638
driven_any: 2.85216756964854
driven_lanedir_consec: 0.0
driven_lanedir: 0.0
get_duckie_state: 0.0837325382232666
get_robot_state: 0.0039458351135253904
get_state_dump: 0.018117817878723144
get_ui_image: 0.015380861282348631
in-drivable-lane: 12.450000000000042
per-episodes details: {"d40-ego0": {"driven_any": 2.85216756964854, "get_ui_image": 0.015380861282348631, "step_physics": 0.07461487579345703, "survival_time": 12.450000000000042, "driven_lanedir": 0.0, "get_state_dump": 0.018117817878723144, "get_robot_state": 0.0039458351135253904, "sim_render-ego0": 0.003860433578491211, "get_duckie_state": 0.0837325382232666, "in-drivable-lane": 12.450000000000042, "deviation-heading": 0.0, "agent_compute-ego0": 0.011639888763427735, "complete-iteration": 0.2250092420578003, "set_robot_commands": 0.0023510465621948244, "distance-from-start": 2.654768815480638, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.009273269653320312, "sim_compute_performance-ego0": 0.001988924026489258}}
set_robot_commands: 0.0023510465621948244
sim_compute_performance-ego0: 0.001988924026489258
sim_compute_sim_state: 0.009273269653320312
sim_render-ego0: 0.003860433578491211
simulation-passed: 1
step_physics: 0.07461487579345703
survival_time: 12.450000000000042
No reset possible
Job 75396 | submission 14873 | Marcus Ong | exercises_braitenberg | mooc-BV1 | sim-3of5 | success | up to date: no | gpu-production-spot-0-02 | 0:03:17
distance-from-start_mean: 1.006522091986853

other stats (one episode; for each metric, max = mean = median = min):
agent_compute-ego0: 0.011748195095897125
complete-iteration: 0.1972786392828431
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 1.006522091986853
driven_any: 1.2371195883656694
driven_lanedir_consec: 0.0
driven_lanedir: 0.0
get_duckie_state: 0.06153098742167155
get_robot_state: 0.003810026830294317
get_state_dump: 0.014542977817933568
get_ui_image: 0.014591513258038143
in-drivable-lane: 14.800000000000075
per-episodes details: {"d30-ego0": {"driven_any": 1.2371195883656694, "get_ui_image": 0.014591513258038143, "step_physics": 0.0723784307036737, "survival_time": 14.800000000000075, "driven_lanedir": 0.0, "get_state_dump": 0.014542977817933568, "get_robot_state": 0.003810026830294317, "sim_render-ego0": 0.003906448280771172, "get_duckie_state": 0.06153098742167155, "in-drivable-lane": 14.800000000000075, "deviation-heading": 0.0, "agent_compute-ego0": 0.011748195095897125, "complete-iteration": 0.1972786392828431, "set_robot_commands": 0.002335732232039223, "distance-from-start": 1.006522091986853, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.01037119053028248, "sim_compute_performance-ego0": 0.0019653766644924177}}
set_robot_commands: 0.002335732232039223
sim_compute_performance-ego0: 0.0019653766644924177
sim_compute_sim_state: 0.01037119053028248
sim_render-ego0: 0.003906448280771172
simulation-passed: 1
step_physics: 0.0723784307036737
survival_time: 14.800000000000075
No reset possible
Job 75389 | submission 14872 | Bruno Maitre | exercises_braitenberg | mooc-BV1 | sim-4of5 | success | up to date: no | gpu-production-spot-0-02 | 0:06:00
distance-from-start_mean: 5.9443689798753585

other stats (one episode; for each metric, max = mean = median = min):
agent_compute-ego0: 0.013367760607174465
complete-iteration: 0.2562017551490239
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 5.9443689798753585
driven_any: 5.988389547910168
driven_lanedir_consec: 0.0
driven_lanedir: 0.0
get_duckie_state: 0.10798523255756924
get_robot_state: 0.004060880201203483
get_state_dump: 0.021744502016476226
get_ui_image: 0.01598720507962363
in-drivable-lane: 27.950000000000266
per-episodes details: {"d50-ego0": {"driven_any": 5.988389547910168, "get_ui_image": 0.01598720507962363, "step_physics": 0.07499084430081504, "survival_time": 27.950000000000266, "driven_lanedir": 0.0, "get_state_dump": 0.021744502016476226, "get_robot_state": 0.004060880201203483, "sim_render-ego0": 0.003964498213359288, "get_duckie_state": 0.10798523255756924, "in-drivable-lane": 27.950000000000266, "deviation-heading": 0.0, "agent_compute-ego0": 0.013367760607174465, "complete-iteration": 0.2562017551490239, "set_robot_commands": 0.0024334047521863667, "distance-from-start": 5.9443689798753585, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.009520341668810163, "sim_compute_performance-ego0": 0.0020499723298209056}}
set_robot_commands: 0.0024334047521863667
sim_compute_performance-ego0: 0.0020499723298209056
sim_compute_sim_state: 0.009520341668810163
sim_render-ego0: 0.003964498213359288
simulation-passed: 1
step_physics: 0.07499084430081504
survival_time: 27.950000000000266
No reset possible
Job 75386 | submission 14871 | Andrew Fletcher | exercises_braitenberg | mooc-BV1 | sim-1of5 | success | up to date: no | gpu-production-spot-0-02 | 0:12:01
distance-from-start_mean: 5.141425367676954

other stats (one episode; for each metric, max = mean = median = min):
agent_compute-ego0: 0.011930985017978185
complete-iteration: 0.2775536846062424
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 5.141425367676954
driven_any: 5.617201511920723
driven_lanedir_consec: 0.0
driven_lanedir: 0.0
get_duckie_state: 0.1274060555838427
get_robot_state: 0.004008154984219287
get_state_dump: 0.02414263932532216
get_ui_image: 0.016438458186204388
in-drivable-lane: 59.99999999999873
per-episodes details: {"d60-ego0": {"driven_any": 5.617201511920723, "get_ui_image": 0.016438458186204388, "step_physics": 0.07542846840089008, "survival_time": 59.99999999999873, "driven_lanedir": 0.0, "get_state_dump": 0.02414263932532216, "get_robot_state": 0.004008154984219287, "sim_render-ego0": 0.003947520236190809, "get_duckie_state": 0.1274060555838427, "in-drivable-lane": 59.99999999999873, "deviation-heading": 0.0, "agent_compute-ego0": 0.011930985017978185, "complete-iteration": 0.2775536846062424, "set_robot_commands": 0.0023549430872578108, "distance-from-start": 5.141425367676954, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.00977585019914435, "sim_compute_performance-ego0": 0.0020213317712280375}}
set_robot_commands: 0.0023549430872578108
sim_compute_performance-ego0: 0.0020213317712280375
sim_compute_sim_state: 0.00977585019914435
sim_render-ego0: 0.003947520236190809
simulation-passed: 1
step_physics: 0.07542846840089008
survival_time: 59.99999999999873
No reset possible
Job 75381 | submission 13686 | Anthony Courchesne 🇨🇦 | Real100FH | aido-LF-sim-validation | sim-2of4 | success | up to date: no | gpu-production-spot-0-02 | 0:02:11
driven_lanedir_consec_median: 1.2710566632094191
survival_time_median: 11.700000000000031
deviation-center-line_median: 0.5672857025046473
in-drivable-lane_median: 4.850000000000005

other stats (one episode; for each metric, max = mean = median = min):
agent_compute-ego0: 0.040948822143230035
complete-iteration: 0.15950641733534793
deviation-center-line: 0.5672857025046473
deviation-heading: 2.605535282243672
distance-from-start: 0.9686301448193164
driven_any: 2.22510085625651
driven_lanedir_consec: 1.2710566632094191
driven_lanedir: 1.2710566632094191
get_duckie_state: 1.1271618782205784e-06
get_robot_state: 0.0034781922685339097
get_state_dump: 0.004417160724071746
get_ui_image: 0.017286691259830556
in-drivable-lane: 4.850000000000005
per-episodes details: {"LF-norm-small_loop-000-ego0": {"driven_any": 2.22510085625651, "get_ui_image": 0.017286691259830556, "step_physics": 0.08057736335916722, "survival_time": 11.700000000000031, "driven_lanedir": 1.2710566632094191, "get_state_dump": 0.004417160724071746, "get_robot_state": 0.0034781922685339097, "sim_render-ego0": 0.0036991870149653007, "get_duckie_state": 1.1271618782205784e-06, "in-drivable-lane": 4.850000000000005, "deviation-heading": 2.605535282243672, "agent_compute-ego0": 0.040948822143230035, "complete-iteration": 0.15950641733534793, "set_robot_commands": 0.002126661260077294, "distance-from-start": 0.9686301448193164, "deviation-center-line": 0.5672857025046473, "driven_lanedir_consec": 1.2710566632094191, "sim_compute_sim_state": 0.0050805588986011264, "sim_compute_performance-ego0": 0.0018166886999251995}}
set_robot_commands: 0.002126661260077294
sim_compute_performance-ego0: 0.0018166886999251995
sim_compute_sim_state: 0.0050805588986011264
sim_render-ego0: 0.0036991870149653007
simulation-passed: 1
step_physics: 0.08057736335916722
survival_time: 11.700000000000031
No reset possible
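A note on how to read these tables: each `*_min` / `*_mean` / `*_median` / `*_max` row is an aggregate of the per-episode values in the `per-episodes details` blob, and since each of these jobs ran a single episode, all four aggregates coincide. A minimal sketch of that aggregation, using values copied from the job above (the `aggregate` helper is hypothetical, not part of the Duckietown tooling):

```python
import json
import statistics

# A trimmed copy of the "per-episodes details" blob from the job above
# (single episode, so min == mean == median == max for every metric).
details = json.loads("""
{"LF-norm-small_loop-000-ego0":
    {"survival_time": 11.700000000000031,
     "deviation-center-line": 0.5672857025046473,
     "driven_lanedir_consec": 1.2710566632094191}}
""")

def aggregate(metric):
    """Collect one metric across all episodes and summarize it."""
    values = [episode[metric] for episode in details.values()]
    return {
        "min": min(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "max": max(values),
    }

print(aggregate("survival_time")["mean"])  # 11.700000000000031
```

With more episodes per job, the four rows would differ and the spread between `min` and `max` would indicate run-to-run variability.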
Job 75379 | submission 13686 | Anthony CourchesneΒ πŸ‡¨πŸ‡¦ | Real100FH | aido-LF-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:02:10
driven_lanedir_consec_median: 0.867927017264546
survival_time_median: 8.399999999999984
deviation-center-line_median: 0.44918036227870695
in-drivable-lane_median: 2.9499999999999957

other stats
agent_compute-ego0_max: 0.05074131700414172
agent_compute-ego0_mean: 0.05074131700414172
agent_compute-ego0_median: 0.05074131700414172
agent_compute-ego0_min: 0.05074131700414172
complete-iteration_max: 0.20931039900469356
complete-iteration_mean: 0.20931039900469356
complete-iteration_median: 0.20931039900469356
complete-iteration_min: 0.20931039900469356
deviation-center-line_max: 0.44918036227870695
deviation-center-line_mean: 0.44918036227870695
deviation-center-line_min: 0.44918036227870695
deviation-heading_max: 2.6793718857683375
deviation-heading_mean: 2.6793718857683375
deviation-heading_median: 2.6793718857683375
deviation-heading_min: 2.6793718857683375
distance-from-start_max: 1.3146938181600147
distance-from-start_mean: 1.3146938181600147
distance-from-start_median: 1.3146938181600147
distance-from-start_min: 1.3146938181600147
driven_any_max: 1.5120144826958442
driven_any_mean: 1.5120144826958442
driven_any_median: 1.5120144826958442
driven_any_min: 1.5120144826958442
driven_lanedir_consec_max: 0.867927017264546
driven_lanedir_consec_mean: 0.867927017264546
driven_lanedir_consec_min: 0.867927017264546
driven_lanedir_max: 0.867927017264546
driven_lanedir_mean: 0.867927017264546
driven_lanedir_median: 0.867927017264546
driven_lanedir_min: 0.867927017264546
get_duckie_state_max: 1.4008854973245657e-06
get_duckie_state_mean: 1.4008854973245657e-06
get_duckie_state_median: 1.4008854973245657e-06
get_duckie_state_min: 1.4008854973245657e-06
get_robot_state_max: 0.003834754052246816
get_robot_state_mean: 0.003834754052246816
get_robot_state_median: 0.003834754052246816
get_robot_state_min: 0.003834754052246816
get_state_dump_max: 0.004792711438511956
get_state_dump_mean: 0.004792711438511956
get_state_dump_median: 0.004792711438511956
get_state_dump_min: 0.004792711438511956
get_ui_image_max: 0.025504606009940423
get_ui_image_mean: 0.025504606009940423
get_ui_image_median: 0.025504606009940423
get_ui_image_min: 0.025504606009940423
in-drivable-lane_max: 2.9499999999999957
in-drivable-lane_mean: 2.9499999999999957
in-drivable-lane_min: 2.9499999999999957
per-episodes details:
{"LF-norm-zigzag-000-ego0": {"driven_any": 1.5120144826958442, "get_ui_image": 0.025504606009940423, "step_physics": 0.10635342823683155, "survival_time": 8.399999999999984, "driven_lanedir": 0.867927017264546, "get_state_dump": 0.004792711438511956, "get_robot_state": 0.003834754052246816, "sim_render-ego0": 0.004099321083204281, "get_duckie_state": 1.4008854973245657e-06, "in-drivable-lane": 2.9499999999999957, "deviation-heading": 2.6793718857683375, "agent_compute-ego0": 0.05074131700414172, "complete-iteration": 0.20931039900469356, "set_robot_commands": 0.0023453249733828936, "distance-from-start": 1.3146938181600147, "deviation-center-line": 0.44918036227870695, "driven_lanedir_consec": 0.867927017264546, "sim_compute_sim_state": 0.009584393021623058, "sim_compute_performance-ego0": 0.001960833397137343}}
set_robot_commands_max: 0.0023453249733828936
set_robot_commands_mean: 0.0023453249733828936
set_robot_commands_median: 0.0023453249733828936
set_robot_commands_min: 0.0023453249733828936
sim_compute_performance-ego0_max: 0.001960833397137343
sim_compute_performance-ego0_mean: 0.001960833397137343
sim_compute_performance-ego0_median: 0.001960833397137343
sim_compute_performance-ego0_min: 0.001960833397137343
sim_compute_sim_state_max: 0.009584393021623058
sim_compute_sim_state_mean: 0.009584393021623058
sim_compute_sim_state_median: 0.009584393021623058
sim_compute_sim_state_min: 0.009584393021623058
sim_render-ego0_max: 0.004099321083204281
sim_render-ego0_mean: 0.004099321083204281
sim_render-ego0_median: 0.004099321083204281
sim_render-ego0_min: 0.004099321083204281
simulation-passed: 1
step_physics_max: 0.10635342823683155
step_physics_mean: 0.10635342823683155
step_physics_median: 0.10635342823683155
step_physics_min: 0.10635342823683155
survival_time_max: 8.399999999999984
survival_time_mean: 8.399999999999984
survival_time_min: 8.399999999999984
No reset possible
Job 75377 | submission 13686 | Anthony CourchesneΒ πŸ‡¨πŸ‡¦ | Real100FH | aido-LF-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:02:28
driven_lanedir_consec_median: 0.8589203139394191
survival_time_median: 8.299999999999983
deviation-center-line_median: 0.4641820322230139
in-drivable-lane_median: 2.7999999999999945

other stats
agent_compute-ego0_max: 0.05263170653474545
agent_compute-ego0_mean: 0.05263170653474545
agent_compute-ego0_median: 0.05263170653474545
agent_compute-ego0_min: 0.05263170653474545
complete-iteration_max: 0.23310750829959345
complete-iteration_mean: 0.23310750829959345
complete-iteration_median: 0.23310750829959345
complete-iteration_min: 0.23310750829959345
deviation-center-line_max: 0.4641820322230139
deviation-center-line_mean: 0.4641820322230139
deviation-center-line_min: 0.4641820322230139
deviation-heading_max: 2.763737143818888
deviation-heading_mean: 2.763737143818888
deviation-heading_median: 2.763737143818888
deviation-heading_min: 2.763737143818888
distance-from-start_max: 1.27796341463992
distance-from-start_mean: 1.27796341463992
distance-from-start_median: 1.27796341463992
distance-from-start_min: 1.27796341463992
driven_any_max: 1.4757999525584504
driven_any_mean: 1.4757999525584504
driven_any_median: 1.4757999525584504
driven_any_min: 1.4757999525584504
driven_lanedir_consec_max: 0.8589203139394191
driven_lanedir_consec_mean: 0.8589203139394191
driven_lanedir_consec_min: 0.8589203139394191
driven_lanedir_max: 0.8589203139394191
driven_lanedir_mean: 0.8589203139394191
driven_lanedir_median: 0.8589203139394191
driven_lanedir_min: 0.8589203139394191
get_duckie_state_max: 1.6318109934915323e-06
get_duckie_state_mean: 1.6318109934915323e-06
get_duckie_state_median: 1.6318109934915323e-06
get_duckie_state_min: 1.6318109934915323e-06
get_robot_state_max: 0.004139977300952295
get_robot_state_mean: 0.004139977300952295
get_robot_state_median: 0.004139977300952295
get_robot_state_min: 0.004139977300952295
get_state_dump_max: 0.005328749468226633
get_state_dump_mean: 0.005328749468226633
get_state_dump_median: 0.005328749468226633
get_state_dump_min: 0.005328749468226633
get_ui_image_max: 0.026910413525061693
get_ui_image_mean: 0.026910413525061693
get_ui_image_median: 0.026910413525061693
get_ui_image_min: 0.026910413525061693
in-drivable-lane_max: 2.7999999999999945
in-drivable-lane_mean: 2.7999999999999945
in-drivable-lane_min: 2.7999999999999945
per-episodes details:
{"LF-norm-zigzag-000-ego0": {"driven_any": 1.4757999525584504, "get_ui_image": 0.026910413525061693, "step_physics": 0.12420847601519372, "survival_time": 8.299999999999983, "driven_lanedir": 0.8589203139394191, "get_state_dump": 0.005328749468226633, "get_robot_state": 0.004139977300952295, "sim_render-ego0": 0.00440635938130453, "get_duckie_state": 1.6318109934915323e-06, "in-drivable-lane": 2.7999999999999945, "deviation-heading": 2.763737143818888, "agent_compute-ego0": 0.05263170653474545, "complete-iteration": 0.23310750829959345, "set_robot_commands": 0.002638192947753175, "distance-from-start": 1.27796341463992, "deviation-center-line": 0.4641820322230139, "driven_lanedir_consec": 0.8589203139394191, "sim_compute_sim_state": 0.010504371391798922, "sim_compute_performance-ego0": 0.0022385391646516537}}
set_robot_commands_max: 0.002638192947753175
set_robot_commands_mean: 0.002638192947753175
set_robot_commands_median: 0.002638192947753175
set_robot_commands_min: 0.002638192947753175
sim_compute_performance-ego0_max: 0.0022385391646516537
sim_compute_performance-ego0_mean: 0.0022385391646516537
sim_compute_performance-ego0_median: 0.0022385391646516537
sim_compute_performance-ego0_min: 0.0022385391646516537
sim_compute_sim_state_max: 0.010504371391798922
sim_compute_sim_state_mean: 0.010504371391798922
sim_compute_sim_state_median: 0.010504371391798922
sim_compute_sim_state_min: 0.010504371391798922
sim_render-ego0_max: 0.00440635938130453
sim_render-ego0_mean: 0.00440635938130453
sim_render-ego0_median: 0.00440635938130453
sim_render-ego0_min: 0.00440635938130453
simulation-passed: 1
step_physics_max: 0.12420847601519372
step_physics_mean: 0.12420847601519372
step_physics_median: 0.12420847601519372
step_physics_min: 0.12420847601519372
survival_time_max: 8.299999999999983
survival_time_mean: 8.299999999999983
survival_time_min: 8.299999999999983
No reset possible
Job 75372 | submission 13692 | Samuel Alexander | template-pytorch | aido-LF-sim-validation | sim-2of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:02:12
driven_lanedir_consec_median: 1.1337637568655683
survival_time_median: 10.700000000000015
deviation-center-line_median: 0.5298510033241914
in-drivable-lane_median: 4.9000000000000075

other stats
agent_compute-ego0_max: 0.014394021588702536
agent_compute-ego0_mean: 0.014394021588702536
agent_compute-ego0_median: 0.014394021588702536
agent_compute-ego0_min: 0.014394021588702536
complete-iteration_max: 0.1541336547496707
complete-iteration_mean: 0.1541336547496707
complete-iteration_median: 0.1541336547496707
complete-iteration_min: 0.1541336547496707
deviation-center-line_max: 0.5298510033241914
deviation-center-line_mean: 0.5298510033241914
deviation-center-line_min: 0.5298510033241914
deviation-heading_max: 3.0322222707554367
deviation-heading_mean: 3.0322222707554367
deviation-heading_median: 3.0322222707554367
deviation-heading_min: 3.0322222707554367
distance-from-start_max: 1.297194786458104
distance-from-start_mean: 1.297194786458104
distance-from-start_median: 1.297194786458104
distance-from-start_min: 1.297194786458104
driven_any_max: 2.434829201665317
driven_any_mean: 2.434829201665317
driven_any_median: 2.434829201665317
driven_any_min: 2.434829201665317
driven_lanedir_consec_max: 1.1337637568655683
driven_lanedir_consec_mean: 1.1337637568655683
driven_lanedir_consec_min: 1.1337637568655683
driven_lanedir_max: 1.1337637568655683
driven_lanedir_mean: 1.1337637568655683
driven_lanedir_median: 1.1337637568655683
driven_lanedir_min: 1.1337637568655683
get_duckie_state_max: 1.5824340110601382e-06
get_duckie_state_mean: 1.5824340110601382e-06
get_duckie_state_median: 1.5824340110601382e-06
get_duckie_state_min: 1.5824340110601382e-06
get_robot_state_max: 0.00409614319025084
get_robot_state_mean: 0.00409614319025084
get_robot_state_median: 0.00409614319025084
get_robot_state_min: 0.00409614319025084
get_state_dump_max: 0.005188642546188
get_state_dump_mean: 0.005188642546188
get_state_dump_median: 0.005188642546188
get_state_dump_min: 0.005188642546188
get_ui_image_max: 0.018718565342038175
get_ui_image_mean: 0.018718565342038175
get_ui_image_median: 0.018718565342038175
get_ui_image_min: 0.018718565342038175
in-drivable-lane_max: 4.9000000000000075
in-drivable-lane_mean: 4.9000000000000075
in-drivable-lane_min: 4.9000000000000075
per-episodes details:
{"LF-norm-small_loop-000-ego0": {"driven_any": 2.434829201665317, "get_ui_image": 0.018718565342038175, "step_physics": 0.09651129855666048, "survival_time": 10.700000000000015, "driven_lanedir": 1.1337637568655683, "get_state_dump": 0.005188642546188, "get_robot_state": 0.00409614319025084, "sim_render-ego0": 0.004285540691641874, "get_duckie_state": 1.5824340110601382e-06, "in-drivable-lane": 4.9000000000000075, "deviation-heading": 3.0322222707554367, "agent_compute-ego0": 0.014394021588702536, "complete-iteration": 0.1541336547496707, "set_robot_commands": 0.002592822008354719, "distance-from-start": 1.297194786458104, "deviation-center-line": 0.5298510033241914, "driven_lanedir_consec": 1.1337637568655683, "sim_compute_sim_state": 0.006053422218145326, "sim_compute_performance-ego0": 0.00219892235689385}}
set_robot_commands_max: 0.002592822008354719
set_robot_commands_mean: 0.002592822008354719
set_robot_commands_median: 0.002592822008354719
set_robot_commands_min: 0.002592822008354719
sim_compute_performance-ego0_max: 0.00219892235689385
sim_compute_performance-ego0_mean: 0.00219892235689385
sim_compute_performance-ego0_median: 0.00219892235689385
sim_compute_performance-ego0_min: 0.00219892235689385
sim_compute_sim_state_max: 0.006053422218145326
sim_compute_sim_state_mean: 0.006053422218145326
sim_compute_sim_state_median: 0.006053422218145326
sim_compute_sim_state_min: 0.006053422218145326
sim_render-ego0_max: 0.004285540691641874
sim_render-ego0_mean: 0.004285540691641874
sim_render-ego0_median: 0.004285540691641874
sim_render-ego0_min: 0.004285540691641874
simulation-passed: 1
step_physics_max: 0.09651129855666048
step_physics_mean: 0.09651129855666048
step_physics_median: 0.09651129855666048
step_physics_min: 0.09651129855666048
survival_time_max: 10.700000000000015
survival_time_mean: 10.700000000000015
survival_time_min: 10.700000000000015
No reset possible
Job 75368 | submission 13694 | Samuel Alexander | template-tensorflow | aido-LF-sim-validation | sim-1of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:02:11
driven_lanedir_consec_median: 0.924569489978571
survival_time_median: 9.6
deviation-center-line_median: 0.2578858862532039
in-drivable-lane_median: 6.950000000000006

other stats
agent_compute-ego0_max: 0.025146537494165292
agent_compute-ego0_mean: 0.025146537494165292
agent_compute-ego0_median: 0.025146537494165292
agent_compute-ego0_min: 0.025146537494165292
complete-iteration_max: 0.2069157059328544
complete-iteration_mean: 0.2069157059328544
complete-iteration_median: 0.2069157059328544
complete-iteration_min: 0.2069157059328544
deviation-center-line_max: 0.2578858862532039
deviation-center-line_mean: 0.2578858862532039
deviation-center-line_min: 0.2578858862532039
deviation-heading_max: 1.19202809461615
deviation-heading_mean: 1.19202809461615
deviation-heading_median: 1.19202809461615
deviation-heading_min: 1.19202809461615
distance-from-start_max: 1.3854782617359689
distance-from-start_mean: 1.3854782617359689
distance-from-start_median: 1.3854782617359689
distance-from-start_min: 1.3854782617359689
driven_any_max: 3.668055390527027
driven_any_mean: 3.668055390527027
driven_any_median: 3.668055390527027
driven_any_min: 3.668055390527027
driven_lanedir_consec_max: 0.924569489978571
driven_lanedir_consec_mean: 0.924569489978571
driven_lanedir_consec_min: 0.924569489978571
driven_lanedir_max: 0.924569489978571
driven_lanedir_mean: 0.924569489978571
driven_lanedir_median: 0.924569489978571
driven_lanedir_min: 0.924569489978571
get_duckie_state_max: 1.3625683562125566e-06
get_duckie_state_mean: 1.3625683562125566e-06
get_duckie_state_median: 1.3625683562125566e-06
get_duckie_state_min: 1.3625683562125566e-06
get_robot_state_max: 0.003790912232868412
get_robot_state_mean: 0.003790912232868412
get_robot_state_median: 0.003790912232868412
get_robot_state_min: 0.003790912232868412
get_state_dump_max: 0.004837836626280157
get_state_dump_mean: 0.004837836626280157
get_state_dump_median: 0.004837836626280157
get_state_dump_min: 0.004837836626280157
get_ui_image_max: 0.02320618950641217
get_ui_image_mean: 0.02320618950641217
get_ui_image_median: 0.02320618950641217
get_ui_image_min: 0.02320618950641217
in-drivable-lane_max: 6.950000000000006
in-drivable-lane_mean: 6.950000000000006
in-drivable-lane_min: 6.950000000000006
per-episodes details:
{"LF-norm-techtrack-000-ego0": {"driven_any": 3.668055390527027, "get_ui_image": 0.02320618950641217, "step_physics": 0.13197524312864314, "survival_time": 9.6, "driven_lanedir": 0.924569489978571, "get_state_dump": 0.004837836626280157, "get_robot_state": 0.003790912232868412, "sim_render-ego0": 0.004133711207098294, "get_duckie_state": 1.3625683562125566e-06, "in-drivable-lane": 6.950000000000006, "deviation-heading": 1.19202809461615, "agent_compute-ego0": 0.025146537494165292, "complete-iteration": 0.2069157059328544, "set_robot_commands": 0.0023148059844970703, "distance-from-start": 1.3854782617359689, "deviation-center-line": 0.2578858862532039, "driven_lanedir_consec": 0.924569489978571, "sim_compute_sim_state": 0.009336549383370987, "sim_compute_performance-ego0": 0.0020823021626843073}}
set_robot_commands_max: 0.0023148059844970703
set_robot_commands_mean: 0.0023148059844970703
set_robot_commands_median: 0.0023148059844970703
set_robot_commands_min: 0.0023148059844970703
sim_compute_performance-ego0_max: 0.0020823021626843073
sim_compute_performance-ego0_mean: 0.0020823021626843073
sim_compute_performance-ego0_median: 0.0020823021626843073
sim_compute_performance-ego0_min: 0.0020823021626843073
sim_compute_sim_state_max: 0.009336549383370987
sim_compute_sim_state_mean: 0.009336549383370987
sim_compute_sim_state_median: 0.009336549383370987
sim_compute_sim_state_min: 0.009336549383370987
sim_render-ego0_max: 0.004133711207098294
sim_render-ego0_mean: 0.004133711207098294
sim_render-ego0_median: 0.004133711207098294
sim_render-ego0_min: 0.004133711207098294
simulation-passed: 1
step_physics_max: 0.13197524312864314
step_physics_mean: 0.13197524312864314
step_physics_median: 0.13197524312864314
step_physics_min: 0.13197524312864314
survival_time_max: 9.6
survival_time_mean: 9.6
survival_time_min: 9.6
No reset possible
Job 75364 | submission 13694 | Samuel Alexander | template-tensorflow | aido-LF-sim-validation | sim-1of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:01:28
driven_lanedir_consec_median: 0.2973791766831426
survival_time_median: 3.599999999999995
deviation-center-line_median: 0.0882049626628675
in-drivable-lane_median: 2.649999999999995

other stats
agent_compute-ego0_max: 0.04699079957726884
agent_compute-ego0_mean: 0.04699079957726884
agent_compute-ego0_median: 0.04699079957726884
agent_compute-ego0_min: 0.04699079957726884
complete-iteration_max: 0.20526205023674116
complete-iteration_mean: 0.20526205023674116
complete-iteration_median: 0.20526205023674116
complete-iteration_min: 0.20526205023674116
deviation-center-line_max: 0.0882049626628675
deviation-center-line_mean: 0.0882049626628675
deviation-center-line_min: 0.0882049626628675
deviation-heading_max: 0.24440125915023547
deviation-heading_mean: 0.24440125915023547
deviation-heading_median: 0.24440125915023547
deviation-heading_min: 0.24440125915023547
distance-from-start_max: 0.95660253274137
distance-from-start_mean: 0.95660253274137
distance-from-start_median: 0.95660253274137
distance-from-start_min: 0.95660253274137
driven_any_max: 0.9918003897860772
driven_any_mean: 0.9918003897860772
driven_any_median: 0.9918003897860772
driven_any_min: 0.9918003897860772
driven_lanedir_consec_max: 0.2973791766831426
driven_lanedir_consec_mean: 0.2973791766831426
driven_lanedir_consec_min: 0.2973791766831426
driven_lanedir_max: 0.2973791766831426
driven_lanedir_mean: 0.2973791766831426
driven_lanedir_median: 0.2973791766831426
driven_lanedir_min: 0.2973791766831426
get_duckie_state_max: 1.290073133494756e-06
get_duckie_state_mean: 1.290073133494756e-06
get_duckie_state_median: 1.290073133494756e-06
get_duckie_state_min: 1.290073133494756e-06
get_robot_state_max: 0.003881124600972215
get_robot_state_mean: 0.003881124600972215
get_robot_state_median: 0.003881124600972215
get_robot_state_min: 0.003881124600972215
get_state_dump_max: 0.005071894763267203
get_state_dump_mean: 0.005071894763267203
get_state_dump_median: 0.005071894763267203
get_state_dump_min: 0.005071894763267203
get_ui_image_max: 0.022652952638390945
get_ui_image_mean: 0.022652952638390945
get_ui_image_median: 0.022652952638390945
get_ui_image_min: 0.022652952638390945
in-drivable-lane_max: 2.649999999999995
in-drivable-lane_mean: 2.649999999999995
in-drivable-lane_min: 2.649999999999995
per-episodes details:
{"LF-norm-techtrack-000-ego0": {"driven_any": 0.9918003897860772, "get_ui_image": 0.022652952638390945, "step_physics": 0.10929419243172424, "survival_time": 3.599999999999995, "driven_lanedir": 0.2973791766831426, "get_state_dump": 0.005071894763267203, "get_robot_state": 0.003881124600972215, "sim_render-ego0": 0.004305555395884056, "get_duckie_state": 1.290073133494756e-06, "in-drivable-lane": 2.649999999999995, "deviation-heading": 0.24440125915023547, "agent_compute-ego0": 0.04699079957726884, "complete-iteration": 0.20526205023674116, "set_robot_commands": 0.002403510759954583, "distance-from-start": 0.95660253274137, "deviation-center-line": 0.0882049626628675, "driven_lanedir_consec": 0.2973791766831426, "sim_compute_sim_state": 0.008442757880851014, "sim_compute_performance-ego0": 0.002123257885240529}}
set_robot_commands_max: 0.002403510759954583
set_robot_commands_mean: 0.002403510759954583
set_robot_commands_median: 0.002403510759954583
set_robot_commands_min: 0.002403510759954583
sim_compute_performance-ego0_max: 0.002123257885240529
sim_compute_performance-ego0_mean: 0.002123257885240529
sim_compute_performance-ego0_median: 0.002123257885240529
sim_compute_performance-ego0_min: 0.002123257885240529
sim_compute_sim_state_max: 0.008442757880851014
sim_compute_sim_state_mean: 0.008442757880851014
sim_compute_sim_state_median: 0.008442757880851014
sim_compute_sim_state_min: 0.008442757880851014
sim_render-ego0_max: 0.004305555395884056
sim_render-ego0_mean: 0.004305555395884056
sim_render-ego0_median: 0.004305555395884056
sim_render-ego0_min: 0.004305555395884056
simulation-passed: 1
step_physics_max: 0.10929419243172424
step_physics_mean: 0.10929419243172424
step_physics_median: 0.10929419243172424
step_physics_min: 0.10929419243172424
survival_time_max: 3.599999999999995
survival_time_mean: 3.599999999999995
survival_time_min: 3.599999999999995
No reset possible
Job 75360 | submission 13696 | Samuel Alexander | template-tensorflow | aido-LF-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:01:58
driven_lanedir_consec_median: 0.4076844744864361
survival_time_median: 2.4499999999999993
deviation-center-line_median: 0.07187871595267395
in-drivable-lane_median: 0.7999999999999985

other stats
agent_compute-ego0_max: 0.05480731964111328
agent_compute-ego0_mean: 0.05480731964111328
agent_compute-ego0_median: 0.05480731964111328
agent_compute-ego0_min: 0.05480731964111328
complete-iteration_max: 0.21577447891235352
complete-iteration_mean: 0.21577447891235352
complete-iteration_median: 0.21577447891235352
complete-iteration_min: 0.21577447891235352
deviation-center-line_max: 0.07187871595267395
deviation-center-line_mean: 0.07187871595267395
deviation-center-line_min: 0.07187871595267395
deviation-heading_max: 0.703306603792231
deviation-heading_mean: 0.703306603792231
deviation-heading_median: 0.703306603792231
deviation-heading_min: 0.703306603792231
distance-from-start_max: 0.42340042829816454
distance-from-start_mean: 0.42340042829816454
distance-from-start_median: 0.42340042829816454
distance-from-start_min: 0.42340042829816454
driven_any_max: 0.5176704447853718
driven_any_mean: 0.5176704447853718
driven_any_median: 0.5176704447853718
driven_any_min: 0.5176704447853718
driven_lanedir_consec_max: 0.4076844744864361
driven_lanedir_consec_mean: 0.4076844744864361
driven_lanedir_consec_min: 0.4076844744864361
driven_lanedir_max: 0.4076844744864361
driven_lanedir_mean: 0.4076844744864361
driven_lanedir_median: 0.4076844744864361
driven_lanedir_min: 0.4076844744864361
get_duckie_state_max: 1.2302398681640623e-06
get_duckie_state_mean: 1.2302398681640623e-06
get_duckie_state_median: 1.2302398681640623e-06
get_duckie_state_min: 1.2302398681640623e-06
get_robot_state_max: 0.0037072420120239256
get_robot_state_mean: 0.0037072420120239256
get_robot_state_median: 0.0037072420120239256
get_robot_state_min: 0.0037072420120239256
get_state_dump_max: 0.0048614883422851566
get_state_dump_mean: 0.0048614883422851566
get_state_dump_median: 0.0048614883422851566
get_state_dump_min: 0.0048614883422851566
get_ui_image_max: 0.02418828010559082
get_ui_image_mean: 0.02418828010559082
get_ui_image_median: 0.02418828010559082
get_ui_image_min: 0.02418828010559082
in-drivable-lane_max: 0.7999999999999985
in-drivable-lane_mean: 0.7999999999999985
in-drivable-lane_min: 0.7999999999999985
per-episodes details:
{"LF-norm-zigzag-000-ego0": {"driven_any": 0.5176704447853718, "get_ui_image": 0.02418828010559082, "step_physics": 0.11131393909454346, "survival_time": 2.4499999999999993, "driven_lanedir": 0.4076844744864361, "get_state_dump": 0.0048614883422851566, "get_robot_state": 0.0037072420120239256, "sim_render-ego0": 0.003975157737731934, "get_duckie_state": 1.2302398681640623e-06, "in-drivable-lane": 0.7999999999999985, "deviation-heading": 0.703306603792231, "agent_compute-ego0": 0.05480731964111328, "complete-iteration": 0.21577447891235352, "set_robot_commands": 0.0023377037048339846, "distance-from-start": 0.42340042829816454, "deviation-center-line": 0.07187871595267395, "driven_lanedir_consec": 0.4076844744864361, "sim_compute_sim_state": 0.008503999710083008, "sim_compute_performance-ego0": 0.00199953556060791}}
set_robot_commands_max: 0.0023377037048339846
set_robot_commands_mean: 0.0023377037048339846
set_robot_commands_median: 0.0023377037048339846
set_robot_commands_min: 0.0023377037048339846
sim_compute_performance-ego0_max: 0.00199953556060791
sim_compute_performance-ego0_mean: 0.00199953556060791
sim_compute_performance-ego0_median: 0.00199953556060791
sim_compute_performance-ego0_min: 0.00199953556060791
sim_compute_sim_state_max: 0.008503999710083008
sim_compute_sim_state_mean: 0.008503999710083008
sim_compute_sim_state_median: 0.008503999710083008
sim_compute_sim_state_min: 0.008503999710083008
sim_render-ego0_max: 0.003975157737731934
sim_render-ego0_mean: 0.003975157737731934
sim_render-ego0_median: 0.003975157737731934
sim_render-ego0_min: 0.003975157737731934
simulation-passed: 1
step_physics_max: 0.11131393909454346
step_physics_mean: 0.11131393909454346
step_physics_median: 0.11131393909454346
step_physics_min: 0.11131393909454346
survival_time_max: 2.4499999999999993
survival_time_mean: 2.4499999999999993
survival_time_min: 2.4499999999999993
No reset possible
Job 75358 | submission 13696 | Samuel Alexander | template-tensorflow | aido-LF-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:01:13
driven_lanedir_consec_median: 0.33944451261395026
survival_time_median: 2.3499999999999996
deviation-center-line_median: 0.06213404723453336
in-drivable-lane_median: 0.7999999999999989

other stats
agent_compute-ego0_max: 0.0586631844441096
agent_compute-ego0_mean: 0.0586631844441096
agent_compute-ego0_median: 0.0586631844441096
agent_compute-ego0_min: 0.0586631844441096
complete-iteration_max: 0.2220446765422821
complete-iteration_mean: 0.2220446765422821
complete-iteration_median: 0.2220446765422821
complete-iteration_min: 0.2220446765422821
deviation-center-line_max: 0.06213404723453336
deviation-center-line_mean: 0.06213404723453336
deviation-center-line_min: 0.06213404723453336
deviation-heading_max: 0.6809572281563235
deviation-heading_mean: 0.6809572281563235
deviation-heading_median: 0.6809572281563235
deviation-heading_min: 0.6809572281563235
distance-from-start_max: 0.3603733708626732
distance-from-start_mean: 0.3603733708626732
distance-from-start_median: 0.3603733708626732
distance-from-start_min: 0.3603733708626732
driven_any_max: 0.4523698782739165
driven_any_mean: 0.4523698782739165
driven_any_median: 0.4523698782739165
driven_any_min: 0.4523698782739165
driven_lanedir_consec_max: 0.33944451261395026
driven_lanedir_consec_mean: 0.33944451261395026
driven_lanedir_consec_min: 0.33944451261395026
driven_lanedir_max: 0.33944451261395026
driven_lanedir_mean: 0.33944451261395026
driven_lanedir_median: 0.33944451261395026
driven_lanedir_min: 0.33944451261395026
get_duckie_state_max: 1.7434358596801758e-06
get_duckie_state_mean: 1.7434358596801758e-06
get_duckie_state_median: 1.7434358596801758e-06
get_duckie_state_min: 1.7434358596801758e-06
get_robot_state_max: 0.004018823305765788
get_robot_state_mean: 0.004018823305765788
get_robot_state_median: 0.004018823305765788
get_robot_state_min: 0.004018823305765788
get_state_dump_max: 0.005413606762886047
get_state_dump_mean: 0.005413606762886047
get_state_dump_median: 0.005413606762886047
get_state_dump_min: 0.005413606762886047
get_ui_image_max: 0.024891957640647888
get_ui_image_mean: 0.024891957640647888
get_ui_image_median: 0.024891957640647888
get_ui_image_min: 0.024891957640647888
in-drivable-lane_max: 0.7999999999999989
in-drivable-lane_mean: 0.7999999999999989
in-drivable-lane_min: 0.7999999999999989
per-episodes details:
{"LF-norm-zigzag-000-ego0": {"driven_any": 0.4523698782739165, "get_ui_image": 0.024891957640647888, "step_physics": 0.11094280083974202, "survival_time": 2.3499999999999996, "driven_lanedir": 0.33944451261395026, "get_state_dump": 0.005413606762886047, "get_robot_state": 0.004018823305765788, "sim_render-ego0": 0.004181618491808574, "get_duckie_state": 1.7434358596801758e-06, "in-drivable-lane": 0.7999999999999989, "deviation-heading": 0.6809572281563235, "agent_compute-ego0": 0.0586631844441096, "complete-iteration": 0.2220446765422821, "set_robot_commands": 0.0024268577496210733, "distance-from-start": 0.3603733708626732, "deviation-center-line": 0.06213404723453336, "driven_lanedir_consec": 0.33944451261395026, "sim_compute_sim_state": 0.009179631868998207, "sim_compute_performance-ego0": 0.002227991819381714}}
set_robot_commands_max: 0.0024268577496210733
set_robot_commands_mean: 0.0024268577496210733
set_robot_commands_median: 0.0024268577496210733
set_robot_commands_min: 0.0024268577496210733
sim_compute_performance-ego0_max: 0.002227991819381714
sim_compute_performance-ego0_mean: 0.002227991819381714
sim_compute_performance-ego0_median: 0.002227991819381714
sim_compute_performance-ego0_min: 0.002227991819381714
sim_compute_sim_state_max: 0.009179631868998207
sim_compute_sim_state_mean: 0.009179631868998207
sim_compute_sim_state_median: 0.009179631868998207
sim_compute_sim_state_min: 0.009179631868998207
sim_render-ego0_max: 0.004181618491808574
sim_render-ego0_mean: 0.004181618491808574
sim_render-ego0_median: 0.004181618491808574
sim_render-ego0_min: 0.004181618491808574
simulation-passed: 1
step_physics_max: 0.11094280083974202
step_physics_mean: 0.11094280083974202
step_physics_median: 0.11094280083974202
step_physics_min: 0.11094280083974202
survival_time_max: 2.3499999999999996
survival_time_mean: 2.3499999999999996
survival_time_min: 2.3499999999999996
No reset possible
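A step such as sim-3of4 is evaluated repeatedly so that a submission's score can be aggregated across runs. As an illustrative sketch (the aggregation rule here is an assumption, not the official challenge scoring code), the three driven_lanedir_consec values that submission 13696 reports on this page (jobs 75360, 75358, 75355) can be combined with a median, which damps a single outlier run:

```python
import statistics

# driven_lanedir_consec from the three sim-3of4 runs of submission 13696
# listed on this page (jobs 75360, 75358, 75355).
runs = [0.4076844744864361, 0.33944451261395026, 0.40287229358792453]

# Median across repeated runs; illustrative only, not the official scoring.
print(statistics.median(runs))  # 0.40287229358792453
```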
Job 75355 | submission 13696 | Samuel Alexander | template-tensorflow | aido-LF-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:01:17
driven_lanedir_consec_median: 0.40287229358792453
survival_time_median: 2.499999999999999
deviation-center-line_median: 0.06961416411680182
in-drivable-lane_median: 0.7999999999999983

other stats (one episode, so max = mean = median = min):
agent_compute-ego0: 0.05668015106051576
complete-iteration: 0.22631023444381415
deviation-center-line: 0.06961416411680182
deviation-heading: 0.7298354586230998
distance-from-start: 0.4198036794019029
driven_any: 0.5258130277193179
driven_lanedir_consec: 0.40287229358792453
driven_lanedir: 0.40287229358792453
get_duckie_state: 2.1831662047143075e-06
get_robot_state: 0.003829296897439395
get_state_dump: 0.005097192876479205
get_ui_image: 0.02453410859201469
in-drivable-lane: 0.7999999999999983
set_robot_commands: 0.002527204214357862
sim_compute_performance-ego0: 0.0021450239069321577
sim_compute_sim_state: 0.008991367676678826
sim_render-ego0: 0.004166350645177504
simulation-passed: 1
step_physics: 0.11824140361711093
survival_time: 2.499999999999999

per-episode details:
{"LF-norm-zigzag-000-ego0": {"driven_any": 0.5258130277193179, "get_ui_image": 0.02453410859201469, "step_physics": 0.11824140361711093, "survival_time": 2.499999999999999, "driven_lanedir": 0.40287229358792453, "get_state_dump": 0.005097192876479205, "get_robot_state": 0.003829296897439395, "sim_render-ego0": 0.004166350645177504, "get_duckie_state": 2.1831662047143075e-06, "in-drivable-lane": 0.7999999999999983, "deviation-heading": 0.7298354586230998, "agent_compute-ego0": 0.05668015106051576, "complete-iteration": 0.22631023444381415, "set_robot_commands": 0.002527204214357862, "distance-from-start": 0.4198036794019029, "deviation-center-line": 0.06961416411680182, "driven_lanedir_consec": 0.40287229358792453, "sim_compute_sim_state": 0.008991367676678826, "sim_compute_performance-ego0": 0.0021450239069321577}}
No reset possible
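In these reports each validation step ran a single episode, which is why every metric's max, mean, median, and min coincide. A minimal sketch of how such aggregate rows could be derived from the per-episode details JSON above (a hypothetical helper for illustration; `aggregate` and its signature are assumptions, not the evaluator's actual code):

```python
import json
import statistics

def aggregate(details: dict) -> dict:
    """Collapse per-episode metrics into <metric>_max/_mean/_median/_min
    rows, mirroring the layout of the stats tables in this report.
    `details` maps episode name -> {metric: value}."""
    out = {}
    metrics = sorted({m for ep in details.values() for m in ep})
    for m in metrics:
        values = [ep[m] for ep in details.values() if m in ep]
        out[f"{m}_max"] = max(values)
        out[f"{m}_mean"] = statistics.fmean(values)
        out[f"{m}_median"] = statistics.median(values)
        out[f"{m}_min"] = min(values)
    return out

# The details cell is a JSON string; with one episode all four rows agree:
rows = aggregate(json.loads('{"ep-000-ego0": {"survival_time": 2.5}}'))
```

With several episodes per step the four rows would differ, which is presumably why the report keeps all four aggregates even when they are equal.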
Job 75353 | submission 13696 | Samuel Alexander | template-tensorflow | aido-LF-sim-validation | sim-2of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:01:53
driven_lanedir_consec_median: 0.08398376798261276
survival_time_median: 7.84999999999998
deviation-center-line_median: 0.05150014469926985
in-drivable-lane_median: 7.29999999999998

other stats (one episode, so max = mean = median = min):
agent_compute-ego0: 0.029005909267860123
complete-iteration: 0.17670868921883498
deviation-center-line: 0.05150014469926985
deviation-heading: 0.3965271485760152
distance-from-start: 1.6804942707305344
driven_any: 3.2115622256358813
driven_lanedir_consec: 0.08398376798261276
driven_lanedir: 0.08398376798261276
get_duckie_state: 1.4259845395631428e-06
get_robot_state: 0.004177362104005451
get_state_dump: 0.005183429657658444
get_ui_image: 0.018808852268170705
in-drivable-lane: 7.29999999999998
set_robot_commands: 0.002380300171767609
sim_compute_performance-ego0: 0.002160061763811715
sim_compute_sim_state: 0.004968499835533432
sim_render-ego0: 0.004314697241481346
simulation-passed: 1
step_physics: 0.10562026198906234
survival_time: 7.84999999999998

per-episode details:
{"LF-norm-small_loop-000-ego0": {"driven_any": 3.2115622256358813, "get_ui_image": 0.018808852268170705, "step_physics": 0.10562026198906234, "survival_time": 7.84999999999998, "driven_lanedir": 0.08398376798261276, "get_state_dump": 0.005183429657658444, "get_robot_state": 0.004177362104005451, "sim_render-ego0": 0.004314697241481346, "get_duckie_state": 1.4259845395631428e-06, "in-drivable-lane": 7.29999999999998, "deviation-heading": 0.3965271485760152, "agent_compute-ego0": 0.029005909267860123, "complete-iteration": 0.17670868921883498, "set_robot_commands": 0.002380300171767609, "distance-from-start": 1.6804942707305344, "deviation-center-line": 0.05150014469926985, "driven_lanedir_consec": 0.08398376798261276, "sim_compute_sim_state": 0.004968499835533432, "sim_compute_performance-ego0": 0.002160061763811715}}
No reset possible
Job 75349 | submission 13910 | YU CHEN | CBC Net v2 - test | aido-LFP-sim-validation | sim-0of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:01:31
survival_time_median: 3.149999999999997
in-drivable-lane_median: 0.8000000000000006
driven_lanedir_consec_median: 0.37504163939521473
deviation-center-line_median: 0.12861670202910228

other stats (one episode, so max = mean = median = min):
agent_compute-ego0: 0.0930991917848587
complete-iteration: 0.2905862219631672
deviation-center-line: 0.12861670202910228
deviation-heading: 0.9299157445221466
distance-from-start: 0.8098591621730764
driven_any: 0.8449235731721684
driven_lanedir_consec: 0.37504163939521473
driven_lanedir: 0.37504163939521473
get_duckie_state: 0.020575065165758133
get_robot_state: 0.003746684640645981
get_state_dump: 0.008067194372415543
get_ui_image: 0.024599500000476837
in-drivable-lane: 0.8000000000000006
set_robot_commands: 0.002436406910419464
sim_compute_performance-ego0: 0.0019626840949058533
sim_compute_sim_state: 0.0113728828728199
sim_render-ego0: 0.0038818828761577606
simulation-passed: 1
step_physics: 0.12073969095945358
survival_time: 3.149999999999997

per-episode details:
{"LFP-norm-zigzag-000-ego0": {"driven_any": 0.8449235731721684, "get_ui_image": 0.024599500000476837, "step_physics": 0.12073969095945358, "survival_time": 3.149999999999997, "driven_lanedir": 0.37504163939521473, "get_state_dump": 0.008067194372415543, "get_robot_state": 0.003746684640645981, "sim_render-ego0": 0.0038818828761577606, "get_duckie_state": 0.020575065165758133, "in-drivable-lane": 0.8000000000000006, "deviation-heading": 0.9299157445221466, "agent_compute-ego0": 0.0930991917848587, "complete-iteration": 0.2905862219631672, "set_robot_commands": 0.002436406910419464, "distance-from-start": 0.8098591621730764, "deviation-center-line": 0.12861670202910228, "driven_lanedir_consec": 0.37504163939521473, "sim_compute_sim_state": 0.0113728828728199, "sim_compute_performance-ego0": 0.0019626840949058533}}
No reset possible
Job 75344 | submission 13910 | YU CHEN | CBC Net v2 - test | aido-LFP-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:04:37
survival_time_median: 20.35000000000015
in-drivable-lane_median: 8.600000000000055
driven_lanedir_consec_median: 3.554338075252989
deviation-center-line_median: 0.8933150069320759

other stats (one episode, so max = mean = median = min):
agent_compute-ego0: 0.09114047768069249
complete-iteration: 0.2857883934881173
deviation-center-line: 0.8933150069320759
deviation-heading: 4.4081604106973
distance-from-start: 3.1655508836353596
driven_any: 8.126975284264976
driven_lanedir_consec: 3.554338075252989
driven_lanedir: 3.554338075252989
get_duckie_state: 0.020665484316208783
get_robot_state: 0.003719149851331524
get_state_dump: 0.008017651590646482
get_ui_image: 0.023786397541270536
in-drivable-lane: 8.600000000000055
set_robot_commands: 0.002430638261869842
sim_compute_performance-ego0: 0.0020070847342996035
sim_compute_sim_state: 0.011303049092199287
sim_render-ego0: 0.003949255335564707
simulation-passed: 1
step_physics: 0.11866506934165956
survival_time: 20.35000000000015

per-episode details:
{"LFP-norm-techtrack-000-ego0": {"driven_any": 8.126975284264976, "get_ui_image": 0.023786397541270536, "step_physics": 0.11866506934165956, "survival_time": 20.35000000000015, "driven_lanedir": 3.554338075252989, "get_state_dump": 0.008017651590646482, "get_robot_state": 0.003719149851331524, "sim_render-ego0": 0.003949255335564707, "get_duckie_state": 0.020665484316208783, "in-drivable-lane": 8.600000000000055, "deviation-heading": 4.4081604106973, "agent_compute-ego0": 0.09114047768069249, "complete-iteration": 0.2857883934881173, "set_robot_commands": 0.002430638261869842, "distance-from-start": 3.1655508836353596, "deviation-center-line": 0.8933150069320759, "driven_lanedir_consec": 3.554338075252989, "sim_compute_sim_state": 0.011303049092199287, "sim_compute_performance-ego0": 0.0020070847342996035}}
No reset possible
Job 75339 | submission 13912 | YU CHEN | CBC Net v2 - test | aido-LFP-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:05:48
survival_time_median: 24.35000000000021
in-drivable-lane_median: 4.500000000000041
driven_lanedir_consec_median: 6.679955512231967
deviation-center-line_median: 1.7225642713293403

other stats (one episode, so max = mean = median = min):
agent_compute-ego0: 0.0993711577087152
complete-iteration: 0.338966855748755
deviation-center-line: 1.7225642713293403
deviation-heading: 6.460057751069731
distance-from-start: 3.2074800478301815
driven_any: 8.86806158620547
driven_lanedir_consec: 6.679955512231967
driven_lanedir: 6.679955512231967
get_duckie_state: 0.02366757002033171
get_robot_state: 0.004361944609001035
get_state_dump: 0.008839636552529256
get_ui_image: 0.02528838104889041
in-drivable-lane: 4.500000000000041
set_robot_commands: 0.0027021443257566357
sim_compute_performance-ego0: 0.002317571249164519
sim_compute_sim_state: 0.012484388761833066
sim_render-ego0: 0.004383517581908429
simulation-passed: 1
step_physics: 0.15543364843384164
survival_time: 24.35000000000021

per-episode details:
{"LFP-norm-techtrack-000-ego0": {"driven_any": 8.86806158620547, "get_ui_image": 0.02528838104889041, "step_physics": 0.15543364843384164, "survival_time": 24.35000000000021, "driven_lanedir": 6.679955512231967, "get_state_dump": 0.008839636552529256, "get_robot_state": 0.004361944609001035, "sim_render-ego0": 0.004383517581908429, "get_duckie_state": 0.02366757002033171, "in-drivable-lane": 4.500000000000041, "deviation-heading": 6.460057751069731, "agent_compute-ego0": 0.0993711577087152, "complete-iteration": 0.338966855748755, "set_robot_commands": 0.0027021443257566357, "distance-from-start": 3.2074800478301815, "deviation-center-line": 1.7225642713293403, "driven_lanedir_consec": 6.679955512231967, "sim_compute_sim_state": 0.012484388761833066, "sim_compute_performance-ego0": 0.002317571249164519}}
No reset possible
Job 75338 | submission 13798 | Nicholas Kostelnik | template-random | aido-hello-sim-validation | step 370 | aborted | up to date: no | gpu-production-spot-0-02 | duration 0:00:23
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 261, in _raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.8/dist-packages/requests/models.py", line 941, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.35/images/create?tag=sha256%3Ab13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691&fromImage=docker.io%2Fnitaigao%2Faido-submissions

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 65, in docker_pull
    pulling = client.api.pull(repository=repository, tag=br.tag, stream=True, decode=True)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/image.py", line 415, in pull
    self._raise_for_status(response)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/local/lib/python3.8/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.ImageNotFound: 404 Client Error: Not Found ("pull access denied for nitaigao/aido-submissions, repository does not exist or may require 'docker login': denied: requested access to the resource is denied")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 38, in docker_pull_retry
    return docker_pull(client, image_name, quiet=quiet)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 84, in docker_pull
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: Cannot pull repo  docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691  tag  None

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 745, in get_cr
    cr = run_single(
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 944, in run_single
    docker_pull_retry(client, image, ntimes=4, wait=5)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 42, in docker_pull_retry
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: After trying 4 I still could not pull docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691
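The final error comes from `docker_pull_retry`, which per the traceback attempts the pull four times (`ntimes=4`, `wait=5`) before raising `PullError`. A minimal sketch of that retry pattern under those assumptions (illustrative names only; the real implementation lives in `duckietown_build_utils.docker_pulling` and raises its own exception types):

```python
import time

def pull_with_retry(pull_once, ntimes=4, wait=5.0, sleep=time.sleep):
    """Call pull_once() up to `ntimes` times, sleeping `wait` seconds
    between failed attempts; if every attempt fails, raise RuntimeError
    chained to the last underlying error. (Illustrative stand-in for
    docker_pull_retry seen in the traceback above.)"""
    last_exc = None
    for attempt in range(ntimes):
        try:
            return pull_once()
        except Exception as e:  # docker.errors.ImageNotFound in the real runner
            last_exc = e
            if attempt + 1 < ntimes:
                sleep(wait)
    raise RuntimeError(f"After trying {ntimes} times the pull still failed") from last_exc
```

Note that retrying cannot help here: the image `nitaigao/aido-submissions` simply is not pullable (missing repository or missing `docker login`), so all four attempts fail identically and the job is aborted.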
No reset possible
Job 75333 | submission 13730 | YU CHEN | BC Net V2 | aido-LF-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:01:29
driven_lanedir_consec_median: 1.2505025005135824
survival_time_median: 4.3499999999999925
deviation-center-line_median: 0.1638225555505191
in-drivable-lane_median: 1.099999999999996

other stats (one episode, so max = mean = median = min):
agent_compute-ego0: 0.05799566344781355
complete-iteration: 0.21236690066077493
deviation-center-line: 0.1638225555505191
deviation-heading: 0.8672925589035286
distance-from-start: 1.2952015124171343
driven_any: 1.4661045068003673
driven_lanedir_consec: 1.2505025005135824
driven_lanedir: 1.2505025005135824
get_duckie_state: 1.7556277188387783e-06
get_robot_state: 0.00383111834526062
get_state_dump: 0.0048170956698330965
get_ui_image: 0.02363965998996388
in-drivable-lane: 1.099999999999996
set_robot_commands: 0.002500859173861417
sim_compute_performance-ego0: 0.002015606923536821
sim_compute_sim_state: 0.009047952565279876
sim_render-ego0: 0.0040871338410810995
simulation-passed: 1
step_physics: 0.10434130376035516
survival_time: 4.3499999999999925

per-episode details:
{"LF-norm-zigzag-000-ego0": {"driven_any": 1.4661045068003673, "get_ui_image": 0.02363965998996388, "step_physics": 0.10434130376035516, "survival_time": 4.3499999999999925, "driven_lanedir": 1.2505025005135824, "get_state_dump": 0.0048170956698330965, "get_robot_state": 0.00383111834526062, "sim_render-ego0": 0.0040871338410810995, "get_duckie_state": 1.7556277188387783e-06, "in-drivable-lane": 1.099999999999996, "deviation-heading": 0.8672925589035286, "agent_compute-ego0": 0.05799566344781355, "complete-iteration": 0.21236690066077493, "set_robot_commands": 0.002500859173861417, "distance-from-start": 1.2952015124171343, "deviation-center-line": 0.1638225555505191, "driven_lanedir_consec": 1.2505025005135824, "sim_compute_sim_state": 0.009047952565279876, "sim_compute_performance-ego0": 0.002015606923536821}}
No reset possible
Job 75330 | submission 13730 | YU CHEN | BC Net V2 | aido-LF-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:02:12
driven_lanedir_consec_median: 1.1754427312604734
survival_time_median: 7.89999999999998
deviation-center-line_median: 0.15157781677202775
in-drivable-lane_median: 4.849999999999983

other stats (one episode, so max = mean = median = min):
agent_compute-ego0: 0.06085264008000211
complete-iteration: 0.23300353086219644
deviation-center-line: 0.15157781677202775
deviation-heading: 0.7912632531298347
distance-from-start: 2.426034812305827
driven_any: 2.7408792656231658
driven_lanedir_consec: 1.1754427312604734
driven_lanedir: 1.1754427312604734
get_duckie_state: 1.7064172516828812e-06
get_robot_state: 0.0041198955391937835
get_state_dump: 0.005334930599860425
get_ui_image: 0.025493065516153973
in-drivable-lane: 4.849999999999983
set_robot_commands: 0.0026943473695958935
sim_compute_performance-ego0: 0.00214431420812067
sim_compute_sim_state: 0.00962491605266835
sim_render-ego0: 0.004451027456319557
simulation-passed: 1
step_physics: 0.11819243281142516
survival_time: 7.89999999999998

per-episode details:
{"LF-norm-zigzag-000-ego0": {"driven_any": 2.7408792656231658, "get_ui_image": 0.025493065516153973, "step_physics": 0.11819243281142516, "survival_time": 7.89999999999998, "driven_lanedir": 1.1754427312604734, "get_state_dump": 0.005334930599860425, "get_robot_state": 0.0041198955391937835, "sim_render-ego0": 0.004451027456319557, "get_duckie_state": 1.7064172516828812e-06, "in-drivable-lane": 4.849999999999983, "deviation-heading": 0.7912632531298347, "agent_compute-ego0": 0.06085264008000211, "complete-iteration": 0.23300353086219644, "set_robot_commands": 0.0026943473695958935, "distance-from-start": 2.426034812305827, "deviation-center-line": 0.15157781677202775, "driven_lanedir_consec": 1.1754427312604734, "sim_compute_sim_state": 0.00962491605266835, "sim_compute_performance-ego0": 0.00214431420812067}}
No reset possible
Job 75315 | submission 13941 | YU CHEN | CBC Net v2 test - added mar 31 anomaly + mar 28 bc | aido-LFP-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:11:26
survival_time_median: 59.99999999999873
in-drivable-lane_median: 23.999999999999552
driven_lanedir_consec_median: 12.646622103244836
deviation-center-line_median: 3.1050083610433195

other stats (one episode, so max = mean = median = min):
agent_compute-ego0: 0.09263039786650874
complete-iteration: 0.2903149975626594
deviation-center-line: 3.1050083610433195
deviation-heading: 12.366267645061033
distance-from-start: 3.128377503087781
driven_any: 25.14429205274561
driven_lanedir_consec: 12.646622103244836
driven_lanedir: 12.646622103244836
get_duckie_state: 0.021009485290806856
get_robot_state: 0.003801329546824383
get_state_dump: 0.008133626004043566
get_ui_image: 0.023568159336849217
in-drivable-lane: 23.999999999999552

per-episode details:
{"LFP-norm-techtrack-000-ego0": {"driven_any": 25.14429205274561, "get_ui_image": 0.023568159336849217, "step_physics": 0.121248896374095, "survival_time": 59.99999999999873, "driven_lanedir": 12.646622103244836, "get_state_dump": 0.008133626004043566, "get_robot_state": 0.003801329546824383, "sim_render-ego0": 0.003928987707921012, "get_duckie_state": 0.021009485290806856, "in-drivable-lane": 23.999999999999552, "deviation-heading": 12.366267645061033, "agent_compute-ego0": 0.09263039786650874, "complete-iteration": 0.2903149975626594, "set_robot_commands": 0.002352051294217201, "distance-from-start": 3.128377503087781, "deviation-center-line": 3.1050083610433195, "driven_lanedir_consec": 12.646622103244836, "sim_compute_sim_state": 0.011588099199369688, "sim_compute_performance-ego0": 0.0019605926034055483}}
set_robot_commands_max0.002352051294217201
set_robot_commands_mean0.002352051294217201
set_robot_commands_median0.002352051294217201
set_robot_commands_min0.002352051294217201
sim_compute_performance-ego0_max0.0019605926034055483
sim_compute_performance-ego0_mean0.0019605926034055483
sim_compute_performance-ego0_median0.0019605926034055483
sim_compute_performance-ego0_min0.0019605926034055483
sim_compute_sim_state_max0.011588099199369688
sim_compute_sim_state_mean0.011588099199369688
sim_compute_sim_state_median0.011588099199369688
sim_compute_sim_state_min0.011588099199369688
sim_render-ego0_max0.003928987707921012
sim_render-ego0_mean0.003928987707921012
sim_render-ego0_median0.003928987707921012
sim_render-ego0_min0.003928987707921012
simulation-passed1
step_physics_max0.121248896374095
step_physics_mean0.121248896374095
step_physics_median0.121248896374095
step_physics_min0.121248896374095
survival_time_max59.99999999999873
survival_time_mean59.99999999999873
survival_time_min59.99999999999873
No reset possible
Job 75302 | submission 13732 | YU CHEN | BC Net V2 | aido-LF-sim-validation | sim-1of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:07:50
driven_lanedir_consec_median: 8.763417221514384
survival_time_median: 33.150000000000254
deviation-center-line_median: 2.0912313931632913
in-drivable-lane_median: 7.85000000000003


other stats
agent_compute-ego0_max0.05465467543487089
agent_compute-ego0_mean0.05465467543487089
agent_compute-ego0_median0.05465467543487089
agent_compute-ego0_min0.05465467543487089
complete-iteration_max0.23832516713314747
complete-iteration_mean0.23832516713314747
complete-iteration_median0.23832516713314747
complete-iteration_min0.23832516713314747
deviation-center-line_max2.0912313931632913
deviation-center-line_mean2.0912313931632913
deviation-center-line_min2.0912313931632913
deviation-heading_max8.07061697065045
deviation-heading_mean8.07061697065045
deviation-heading_median8.07061697065045
deviation-heading_min8.07061697065045
distance-from-start_max3.587216793550055
distance-from-start_mean3.587216793550055
distance-from-start_median3.587216793550055
distance-from-start_min3.587216793550055
driven_any_max11.572883558959084
driven_any_mean11.572883558959084
driven_any_median11.572883558959084
driven_any_min11.572883558959084
driven_lanedir_consec_max8.763417221514384
driven_lanedir_consec_mean8.763417221514384
driven_lanedir_consec_min8.763417221514384
driven_lanedir_max8.763417221514384
driven_lanedir_mean8.763417221514384
driven_lanedir_median8.763417221514384
driven_lanedir_min8.763417221514384
get_duckie_state_max1.423689256231469e-06
get_duckie_state_mean1.423689256231469e-06
get_duckie_state_median1.423689256231469e-06
get_duckie_state_min1.423689256231469e-06
get_robot_state_max0.003856511001127312
get_robot_state_mean0.003856511001127312
get_robot_state_median0.003856511001127312
get_robot_state_min0.003856511001127312
get_state_dump_max0.0048923959215003325
get_state_dump_mean0.0048923959215003325
get_state_dump_median0.0048923959215003325
get_state_dump_min0.0048923959215003325
get_ui_image_max0.0233069787542504
get_ui_image_mean0.0233069787542504
get_ui_image_median0.0233069787542504
get_ui_image_min0.0233069787542504
in-drivable-lane_max7.85000000000003
in-drivable-lane_mean7.85000000000003
in-drivable-lane_min7.85000000000003
per-episodes
details{"LF-norm-techtrack-000-ego0": {"driven_any": 11.572883558959084, "get_ui_image": 0.0233069787542504, "step_physics": 0.13138081151318837, "survival_time": 33.150000000000254, "driven_lanedir": 8.763417221514384, "get_state_dump": 0.0048923959215003325, "get_robot_state": 0.003856511001127312, "sim_render-ego0": 0.004086409706667245, "get_duckie_state": 1.423689256231469e-06, "in-drivable-lane": 7.85000000000003, "deviation-heading": 8.07061697065045, "agent_compute-ego0": 0.05465467543487089, "complete-iteration": 0.23832516713314747, "set_robot_commands": 0.002420235111052731, "distance-from-start": 3.587216793550055, "deviation-center-line": 2.0912313931632913, "driven_lanedir_consec": 8.763417221514384, "sim_compute_sim_state": 0.011566441102200243, "sim_compute_performance-ego0": 0.002068228391279657}}
set_robot_commands_max0.002420235111052731
set_robot_commands_mean0.002420235111052731
set_robot_commands_median0.002420235111052731
set_robot_commands_min0.002420235111052731
sim_compute_performance-ego0_max0.002068228391279657
sim_compute_performance-ego0_mean0.002068228391279657
sim_compute_performance-ego0_median0.002068228391279657
sim_compute_performance-ego0_min0.002068228391279657
sim_compute_sim_state_max0.011566441102200243
sim_compute_sim_state_mean0.011566441102200243
sim_compute_sim_state_median0.011566441102200243
sim_compute_sim_state_min0.011566441102200243
sim_render-ego0_max0.004086409706667245
sim_render-ego0_mean0.004086409706667245
sim_render-ego0_median0.004086409706667245
sim_render-ego0_min0.004086409706667245
simulation-passed1
step_physics_max0.13138081151318837
step_physics_mean0.13138081151318837
step_physics_median0.13138081151318837
step_physics_min0.13138081151318837
survival_time_max33.150000000000254
survival_time_mean33.150000000000254
survival_time_min33.150000000000254
No reset possible
Job 75294 | submission 13911 | YU CHEN | CBC Net v2 - test | aido-LF-sim-validation | sim-1of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:10:36
driven_lanedir_consec_median: 17.94881251044596
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.642961949762125
in-drivable-lane_median: 10.949999999999765


other stats
agent_compute-ego0_max0.09032945271634142
agent_compute-ego0_mean0.09032945271634142
agent_compute-ego0_median0.09032945271634142
agent_compute-ego0_min0.09032945271634142
complete-iteration_max0.2720116018554154
complete-iteration_mean0.2720116018554154
complete-iteration_median0.2720116018554154
complete-iteration_min0.2720116018554154
deviation-center-line_max3.642961949762125
deviation-center-line_mean3.642961949762125
deviation-center-line_min3.642961949762125
deviation-heading_max15.15445239228162
deviation-heading_mean15.15445239228162
deviation-heading_median15.15445239228162
deviation-heading_min15.15445239228162
distance-from-start_max3.448181058297971
distance-from-start_mean3.448181058297971
distance-from-start_median3.448181058297971
distance-from-start_min3.448181058297971
driven_any_max23.540749548788945
driven_any_mean23.540749548788945
driven_any_median23.540749548788945
driven_any_min23.540749548788945
driven_lanedir_consec_max17.94881251044596
driven_lanedir_consec_mean17.94881251044596
driven_lanedir_consec_min17.94881251044596
driven_lanedir_max17.94881251044596
driven_lanedir_mean17.94881251044596
driven_lanedir_median17.94881251044596
driven_lanedir_min17.94881251044596
get_duckie_state_max1.3499136868364904e-06
get_duckie_state_mean1.3499136868364904e-06
get_duckie_state_median1.3499136868364904e-06
get_duckie_state_min1.3499136868364904e-06
get_robot_state_max0.003775596618652344
get_robot_state_mean0.003775596618652344
get_robot_state_median0.003775596618652344
get_robot_state_min0.003775596618652344
get_state_dump_max0.004740343006524714
get_state_dump_mean0.004740343006524714
get_state_dump_median0.004740343006524714
get_state_dump_min0.004740343006524714
get_ui_image_max0.023198187301597627
get_ui_image_mean0.023198187301597627
get_ui_image_median0.023198187301597627
get_ui_image_min0.023198187301597627
in-drivable-lane_max10.949999999999765
in-drivable-lane_mean10.949999999999765
in-drivable-lane_min10.949999999999765
per-episodes
details{"LF-norm-techtrack-000-ego0": {"driven_any": 23.540749548788945, "get_ui_image": 0.023198187301597627, "step_physics": 0.13002943992614746, "survival_time": 59.99999999999873, "driven_lanedir": 17.94881251044596, "get_state_dump": 0.004740343006524714, "get_robot_state": 0.003775596618652344, "sim_render-ego0": 0.003971849452645257, "get_duckie_state": 1.3499136868364904e-06, "in-drivable-lane": 10.949999999999765, "deviation-heading": 15.15445239228162, "agent_compute-ego0": 0.09032945271634142, "complete-iteration": 0.2720116018554154, "set_robot_commands": 0.002385577591730097, "distance-from-start": 3.448181058297971, "deviation-center-line": 3.642961949762125, "driven_lanedir_consec": 17.94881251044596, "sim_compute_sim_state": 0.011488667137914651, "sim_compute_performance-ego0": 0.0020035804062461375}}
set_robot_commands_max0.002385577591730097
set_robot_commands_mean0.002385577591730097
set_robot_commands_median0.002385577591730097
set_robot_commands_min0.002385577591730097
sim_compute_performance-ego0_max0.0020035804062461375
sim_compute_performance-ego0_mean0.0020035804062461375
sim_compute_performance-ego0_median0.0020035804062461375
sim_compute_performance-ego0_min0.0020035804062461375
sim_compute_sim_state_max0.011488667137914651
sim_compute_sim_state_mean0.011488667137914651
sim_compute_sim_state_median0.011488667137914651
sim_compute_sim_state_min0.011488667137914651
sim_render-ego0_max0.003971849452645257
sim_render-ego0_mean0.003971849452645257
sim_render-ego0_median0.003971849452645257
sim_render-ego0_min0.003971849452645257
simulation-passed1
step_physics_max0.13002943992614746
step_physics_mean0.13002943992614746
step_physics_median0.13002943992614746
step_physics_min0.13002943992614746
survival_time_max59.99999999999873
survival_time_mean59.99999999999873
survival_time_min59.99999999999873
No reset possible
Job 75288 | submission 13944 | YU CHEN | CBC Net v2 test - added mar 31 anomaly + mar 28 bc_v1 | aido-LFP-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:04:32
survival_time_median: 20.60000000000016
in-drivable-lane_median: 8.600000000000056
driven_lanedir_consec_median: 3.610457265816446
deviation-center-line_median: 0.8015721477004076


other stats
agent_compute-ego0_max0.09433279083658362
agent_compute-ego0_mean0.09433279083658362
agent_compute-ego0_median0.09433279083658362
agent_compute-ego0_min0.09433279083658362
complete-iteration_max0.2965884047039485
complete-iteration_mean0.2965884047039485
complete-iteration_median0.2965884047039485
complete-iteration_min0.2965884047039485
deviation-center-line_max0.8015721477004076
deviation-center-line_mean0.8015721477004076
deviation-center-line_min0.8015721477004076
deviation-heading_max4.589484121305766
deviation-heading_mean4.589484121305766
deviation-heading_median4.589484121305766
deviation-heading_min4.589484121305766
distance-from-start_max3.1803291522388495
distance-from-start_mean3.1803291522388495
distance-from-start_median3.1803291522388495
distance-from-start_min3.1803291522388495
driven_any_max8.157414594036943
driven_any_mean8.157414594036943
driven_any_median8.157414594036943
driven_any_min8.157414594036943
driven_lanedir_consec_max3.610457265816446
driven_lanedir_consec_mean3.610457265816446
driven_lanedir_consec_min3.610457265816446
driven_lanedir_max3.610457265816446
driven_lanedir_mean3.610457265816446
driven_lanedir_median3.610457265816446
driven_lanedir_min3.610457265816446
get_duckie_state_max0.02124010794965176
get_duckie_state_mean0.02124010794965176
get_duckie_state_median0.02124010794965176
get_duckie_state_min0.02124010794965176
get_robot_state_max0.003853421522976411
get_robot_state_mean0.003853421522976411
get_robot_state_median0.003853421522976411
get_robot_state_min0.003853421522976411
get_state_dump_max0.008475176069984713
get_state_dump_mean0.008475176069984713
get_state_dump_median0.008475176069984713
get_state_dump_min0.008475176069984713
get_ui_image_max0.024035245396611764
get_ui_image_mean0.024035245396611764
get_ui_image_median0.024035245396611764
get_ui_image_min0.024035245396611764
in-drivable-lane_max8.600000000000056
in-drivable-lane_mean8.600000000000056
in-drivable-lane_min8.600000000000056
per-episodes
details{"LFP-norm-techtrack-000-ego0": {"driven_any": 8.157414594036943, "get_ui_image": 0.024035245396611764, "step_physics": 0.1243891358086907, "survival_time": 20.60000000000016, "driven_lanedir": 3.610457265816446, "get_state_dump": 0.008475176069984713, "get_robot_state": 0.003853421522976411, "sim_render-ego0": 0.004097292556023771, "get_duckie_state": 0.02124010794965176, "in-drivable-lane": 8.600000000000056, "deviation-heading": 4.589484121305766, "agent_compute-ego0": 0.09433279083658362, "complete-iteration": 0.2965884047039485, "set_robot_commands": 0.002451360369998664, "distance-from-start": 3.1803291522388495, "deviation-center-line": 0.8015721477004076, "driven_lanedir_consec": 3.610457265816446, "sim_compute_sim_state": 0.01156832750426655, "sim_compute_performance-ego0": 0.002024864169067678}}
set_robot_commands_max0.002451360369998664
set_robot_commands_mean0.002451360369998664
set_robot_commands_median0.002451360369998664
set_robot_commands_min0.002451360369998664
sim_compute_performance-ego0_max0.002024864169067678
sim_compute_performance-ego0_mean0.002024864169067678
sim_compute_performance-ego0_median0.002024864169067678
sim_compute_performance-ego0_min0.002024864169067678
sim_compute_sim_state_max0.01156832750426655
sim_compute_sim_state_mean0.01156832750426655
sim_compute_sim_state_median0.01156832750426655
sim_compute_sim_state_min0.01156832750426655
sim_render-ego0_max0.004097292556023771
sim_render-ego0_mean0.004097292556023771
sim_render-ego0_median0.004097292556023771
sim_render-ego0_min0.004097292556023771
simulation-passed1
step_physics_max0.1243891358086907
step_physics_mean0.1243891358086907
step_physics_median0.1243891358086907
step_physics_min0.1243891358086907
survival_time_max20.60000000000016
survival_time_mean20.60000000000016
survival_time_min20.60000000000016
No reset possible
Job 75281 | submission 13938 | YU CHEN | CBC Net v2 test - added mar 31 dataset | aido-LF-sim-validation | sim-1of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:10:36
driven_lanedir_consec_median: 12.896692921840334
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.8701858210190867
in-drivable-lane_median: 26.099999999999305


other stats
agent_compute-ego0_max0.09283277811753958
agent_compute-ego0_mean0.09283277811753958
agent_compute-ego0_median0.09283277811753958
agent_compute-ego0_min0.09283277811753958
complete-iteration_max0.26290917297287053
complete-iteration_mean0.26290917297287053
complete-iteration_median0.26290917297287053
complete-iteration_min0.26290917297287053
deviation-center-line_max2.8701858210190867
deviation-center-line_mean2.8701858210190867
deviation-center-line_min2.8701858210190867
deviation-heading_max9.112335216963148
deviation-heading_mean9.112335216963148
deviation-heading_median9.112335216963148
deviation-heading_min9.112335216963148
distance-from-start_max3.550983288293658
distance-from-start_mean3.550983288293658
distance-from-start_median3.550983288293658
distance-from-start_min3.550983288293658
driven_any_max22.610253767117623
driven_any_mean22.610253767117623
driven_any_median22.610253767117623
driven_any_min22.610253767117623
driven_lanedir_consec_max12.896692921840334
driven_lanedir_consec_mean12.896692921840334
driven_lanedir_consec_min12.896692921840334
driven_lanedir_max12.896692921840334
driven_lanedir_mean12.896692921840334
driven_lanedir_median12.896692921840334
driven_lanedir_min12.896692921840334
get_duckie_state_max1.3544795713654962e-06
get_duckie_state_mean1.3544795713654962e-06
get_duckie_state_median1.3544795713654962e-06
get_duckie_state_min1.3544795713654962e-06
get_robot_state_max0.0037600855148404366
get_robot_state_mean0.0037600855148404366
get_robot_state_median0.0037600855148404366
get_robot_state_min0.0037600855148404366
get_state_dump_max0.004837817891650553
get_state_dump_mean0.004837817891650553
get_state_dump_median0.004837817891650553
get_state_dump_min0.004837817891650553
get_ui_image_max0.02297898950822943
get_ui_image_mean0.02297898950822943
get_ui_image_median0.02297898950822943
get_ui_image_min0.02297898950822943
in-drivable-lane_max26.099999999999305
in-drivable-lane_mean26.099999999999305
in-drivable-lane_min26.099999999999305
per-episodes
details{"LF-norm-techtrack-000-ego0": {"driven_any": 22.610253767117623, "get_ui_image": 0.02297898950822943, "step_physics": 0.11855374903206424, "survival_time": 59.99999999999873, "driven_lanedir": 12.896692921840334, "get_state_dump": 0.004837817891650553, "get_robot_state": 0.0037600855148404366, "sim_render-ego0": 0.003980727715853549, "get_duckie_state": 1.3544795713654962e-06, "in-drivable-lane": 26.099999999999305, "deviation-heading": 9.112335216963148, "agent_compute-ego0": 0.09283277811753958, "complete-iteration": 0.26290917297287053, "set_robot_commands": 0.002410208950630334, "distance-from-start": 3.550983288293658, "deviation-center-line": 2.8701858210190867, "driven_lanedir_consec": 12.896692921840334, "sim_compute_sim_state": 0.011459953679728766, "sim_compute_performance-ego0": 0.002000061895924742}}
set_robot_commands_max0.002410208950630334
set_robot_commands_mean0.002410208950630334
set_robot_commands_median0.002410208950630334
set_robot_commands_min0.002410208950630334
sim_compute_performance-ego0_max0.002000061895924742
sim_compute_performance-ego0_mean0.002000061895924742
sim_compute_performance-ego0_median0.002000061895924742
sim_compute_performance-ego0_min0.002000061895924742
sim_compute_sim_state_max0.011459953679728766
sim_compute_sim_state_mean0.011459953679728766
sim_compute_sim_state_median0.011459953679728766
sim_compute_sim_state_min0.011459953679728766
sim_render-ego0_max0.003980727715853549
sim_render-ego0_mean0.003980727715853549
sim_render-ego0_median0.003980727715853549
sim_render-ego0_min0.003980727715853549
simulation-passed1
step_physics_max0.11855374903206424
step_physics_mean0.11855374903206424
step_physics_median0.11855374903206424
step_physics_min0.11855374903206424
survival_time_max59.99999999999873
survival_time_mean59.99999999999873
survival_time_min59.99999999999873
No reset possible
Job 75276 | submission 13940 | YU CHEN | CBC Net v2 test - added mar 31 anomaly + mar 28 bc | aido-LF-sim-validation | sim-2of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:09:40
driven_lanedir_consec_median: 12.60854261449779
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.5644220533016635
in-drivable-lane_median: 28.299999999999553


other stats
agent_compute-ego0_max0.09111656376364624
agent_compute-ego0_mean0.09111656376364624
agent_compute-ego0_median0.09111656376364624
agent_compute-ego0_min0.09111656376364624
complete-iteration_max0.2331961058855652
complete-iteration_mean0.2331961058855652
complete-iteration_median0.2331961058855652
complete-iteration_min0.2331961058855652
deviation-center-line_max2.5644220533016635
deviation-center-line_mean2.5644220533016635
deviation-center-line_min2.5644220533016635
deviation-heading_max13.939072259351835
deviation-heading_mean13.939072259351835
deviation-heading_median13.939072259351835
deviation-heading_min13.939072259351835
distance-from-start_max1.3113309238095552
distance-from-start_mean1.3113309238095552
distance-from-start_median1.3113309238095552
distance-from-start_min1.3113309238095552
driven_any_max24.502788281405977
driven_any_mean24.502788281405977
driven_any_median24.502788281405977
driven_any_min24.502788281405977
driven_lanedir_consec_max12.60854261449779
driven_lanedir_consec_mean12.60854261449779
driven_lanedir_consec_min12.60854261449779
driven_lanedir_max12.60854261449779
driven_lanedir_mean12.60854261449779
driven_lanedir_median12.60854261449779
driven_lanedir_min12.60854261449779
get_duckie_state_max1.288572020772891e-06
get_duckie_state_mean1.288572020772891e-06
get_duckie_state_median1.288572020772891e-06
get_duckie_state_min1.288572020772891e-06
get_robot_state_max0.003757373180913488
get_robot_state_mean0.003757373180913488
get_robot_state_median0.003757373180913488
get_robot_state_min0.003757373180913488
get_state_dump_max0.004717371842942567
get_state_dump_mean0.004717371842942567
get_state_dump_median0.004717371842942567
get_state_dump_min0.004717371842942567
get_ui_image_max0.01868530494982158
get_ui_image_mean0.01868530494982158
get_ui_image_median0.01868530494982158
get_ui_image_min0.01868530494982158
in-drivable-lane_max28.299999999999553
in-drivable-lane_mean28.299999999999553
in-drivable-lane_min28.299999999999553
per-episodes
details{"LF-norm-small_loop-000-ego0": {"driven_any": 24.502788281405977, "get_ui_image": 0.01868530494982158, "step_physics": 0.10100566060417995, "survival_time": 59.99999999999873, "driven_lanedir": 12.60854261449779, "get_state_dump": 0.004717371842942567, "get_robot_state": 0.003757373180913488, "sim_render-ego0": 0.003885491305247234, "get_duckie_state": 1.288572020772891e-06, "in-drivable-lane": 28.299999999999553, "deviation-heading": 13.939072259351835, "agent_compute-ego0": 0.09111656376364624, "complete-iteration": 0.2331961058855652, "set_robot_commands": 0.0023829236217184327, "distance-from-start": 1.3113309238095552, "deviation-center-line": 2.5644220533016635, "driven_lanedir_consec": 12.60854261449779, "sim_compute_sim_state": 0.005601048370285098, "sim_compute_performance-ego0": 0.0019588200476247006}}
set_robot_commands_max0.0023829236217184327
set_robot_commands_mean0.0023829236217184327
set_robot_commands_median0.0023829236217184327
set_robot_commands_min0.0023829236217184327
sim_compute_performance-ego0_max0.0019588200476247006
sim_compute_performance-ego0_mean0.0019588200476247006
sim_compute_performance-ego0_median0.0019588200476247006
sim_compute_performance-ego0_min0.0019588200476247006
sim_compute_sim_state_max0.005601048370285098
sim_compute_sim_state_mean0.005601048370285098
sim_compute_sim_state_median0.005601048370285098
sim_compute_sim_state_min0.005601048370285098
sim_render-ego0_max0.003885491305247234
sim_render-ego0_mean0.003885491305247234
sim_render-ego0_median0.003885491305247234
sim_render-ego0_min0.003885491305247234
simulation-passed1
step_physics_max0.10100566060417995
step_physics_mean0.10100566060417995
step_physics_median0.10100566060417995
step_physics_min0.10100566060417995
survival_time_max59.99999999999873
survival_time_mean59.99999999999873
survival_time_min59.99999999999873
No reset possible
Job 75269 | submission 13943 | YU CHEN | CBC Net v2 test - added mar 31 anomaly + mar 28 bc_v1 | aido-LF-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:10:57
driven_lanedir_consec_median: 12.125631495199798
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.2649580025329143
in-drivable-lane_median: 24.64999999999951


other stats
agent_compute-ego0_max0.08757934463113472
agent_compute-ego0_mean0.08757934463113472
agent_compute-ego0_median0.08757934463113472
agent_compute-ego0_min0.08757934463113472
complete-iteration_max0.26728750406752816
complete-iteration_mean0.26728750406752816
complete-iteration_median0.26728750406752816
complete-iteration_min0.26728750406752816
deviation-center-line_max3.2649580025329143
deviation-center-line_mean3.2649580025329143
deviation-center-line_min3.2649580025329143
deviation-heading_max11.14684152848446
deviation-heading_mean11.14684152848446
deviation-heading_median11.14684152848446
deviation-heading_min11.14684152848446
distance-from-start_max3.5677832285272637
distance-from-start_mean3.5677832285272637
distance-from-start_median3.5677832285272637
distance-from-start_min3.5677832285272637
driven_any_max23.426843731469873
driven_any_mean23.426843731469873
driven_any_median23.426843731469873
driven_any_min23.426843731469873
driven_lanedir_consec_max12.125631495199798
driven_lanedir_consec_mean12.125631495199798
driven_lanedir_consec_min12.125631495199798
driven_lanedir_max12.125631495199798
driven_lanedir_mean12.125631495199798
driven_lanedir_median12.125631495199798
driven_lanedir_min12.125631495199798
get_duckie_state_max1.3250990970049374e-06
get_duckie_state_mean1.3250990970049374e-06
get_duckie_state_median1.3250990970049374e-06
get_duckie_state_min1.3250990970049374e-06
get_robot_state_max0.003773846693777422
get_robot_state_mean0.003773846693777422
get_robot_state_median0.003773846693777422
get_robot_state_min0.003773846693777422
get_state_dump_max0.004683063786591618
get_state_dump_mean0.004683063786591618
get_state_dump_median0.004683063786591618
get_state_dump_min0.004683063786591618
get_ui_image_max0.024408536588619592
get_ui_image_mean0.024408536588619592
get_ui_image_median0.024408536588619592
get_ui_image_min0.024408536588619592
in-drivable-lane_max24.64999999999951
in-drivable-lane_mean24.64999999999951
in-drivable-lane_min24.64999999999951
per-episodes
details{"LF-norm-zigzag-000-ego0": {"driven_any": 23.426843731469873, "get_ui_image": 0.024408536588619592, "step_physics": 0.12612872278561302, "survival_time": 59.99999999999873, "driven_lanedir": 12.125631495199798, "get_state_dump": 0.004683063786591618, "get_robot_state": 0.003773846693777422, "sim_render-ego0": 0.004018803421007803, "get_duckie_state": 1.3250990970049374e-06, "in-drivable-lane": 24.64999999999951, "deviation-heading": 11.14684152848446, "agent_compute-ego0": 0.08757934463113472, "complete-iteration": 0.26728750406752816, "set_robot_commands": 0.002403769862344124, "distance-from-start": 3.5677832285272637, "deviation-center-line": 3.2649580025329143, "driven_lanedir_consec": 12.125631495199798, "sim_compute_sim_state": 0.01220598526540942, "sim_compute_performance-ego0": 0.002004027465896543}}
set_robot_commands_max0.002403769862344124
set_robot_commands_mean0.002403769862344124
set_robot_commands_median0.002403769862344124
set_robot_commands_min0.002403769862344124
sim_compute_performance-ego0_max0.002004027465896543
sim_compute_performance-ego0_mean0.002004027465896543
sim_compute_performance-ego0_median0.002004027465896543
sim_compute_performance-ego0_min0.002004027465896543
sim_compute_sim_state_max0.01220598526540942
sim_compute_sim_state_mean0.01220598526540942
sim_compute_sim_state_median0.01220598526540942
sim_compute_sim_state_min0.01220598526540942
sim_render-ego0_max0.004018803421007803
sim_render-ego0_mean0.004018803421007803
sim_render-ego0_median0.004018803421007803
sim_render-ego0_min0.004018803421007803
simulation-passed1
step_physics_max0.12612872278561302
step_physics_mean0.12612872278561302
step_physics_median0.12612872278561302
step_physics_min0.12612872278561302
survival_time_max59.99999999999873
survival_time_mean59.99999999999873
survival_time_min59.99999999999873
No reset possible
Job 75266 | submission 13946 | YU CHEN | CBC Net v2 test - added APR 1 2 times anomaly + mar 28 bc_v1 | aido-LFP-sim-validation | sim-0of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:02:21
survival_time_median: 7.79999999999998
in-drivable-lane_median: 1.849999999999996
driven_lanedir_consec_median: 1.6142705316890744
deviation-center-line_median: 0.4969858285005154


other stats
agent_compute-ego0_max0.0954569767994486
agent_compute-ego0_mean0.0954569767994486
agent_compute-ego0_median0.0954569767994486
agent_compute-ego0_min0.0954569767994486
complete-iteration_max0.3171377637583739
complete-iteration_mean0.3171377637583739
complete-iteration_median0.3171377637583739
complete-iteration_min0.3171377637583739
deviation-center-line_max0.4969858285005154
deviation-center-line_mean0.4969858285005154
deviation-center-line_min0.4969858285005154
deviation-heading_max1.7801541038401614
deviation-heading_mean1.7801541038401614
deviation-heading_median1.7801541038401614
deviation-heading_min1.7801541038401614
distance-from-start_max1.861644180012093
distance-from-start_mean1.861644180012093
distance-from-start_median1.861644180012093
distance-from-start_min1.861644180012093
(single episode, so max = mean = median = min for each stat)
driven_any: 2.5495755655862675
driven_lanedir_consec: 1.6142705316890744
driven_lanedir: 1.6142705316890744
get_duckie_state: 0.021519170445241747
get_robot_state: 0.003906714688440797
get_state_dump: 0.008311315706581068
get_ui_image: 0.025410656716413557
in-drivable-lane: 1.849999999999996
per-episodes details: {"LFP-norm-zigzag-000-ego0": {"driven_any": 2.5495755655862675, "get_ui_image": 0.025410656716413557, "step_physics": 0.1390575755173993, "survival_time": 7.79999999999998, "driven_lanedir": 1.6142705316890744, "get_state_dump": 0.008311315706581068, "get_robot_state": 0.003906714688440797, "sim_render-ego0": 0.0041252761889415185, "get_duckie_state": 0.021519170445241747, "in-drivable-lane": 1.849999999999996, "deviation-heading": 1.7801541038401614, "agent_compute-ego0": 0.0954569767994486, "complete-iteration": 0.3171377637583739, "set_robot_commands": 0.0024379165309249976, "distance-from-start": 1.861644180012093, "deviation-center-line": 0.4969858285005154, "driven_lanedir_consec": 1.6142705316890744, "sim_compute_sim_state": 0.014762734151949548, "sim_compute_performance-ego0": 0.0020456602618952467}}
set_robot_commands: 0.0024379165309249976
sim_compute_performance-ego0: 0.0020456602618952467
sim_compute_sim_state: 0.014762734151949548
sim_render-ego0: 0.0041252761889415185
simulation-passed: 1
step_physics: 0.1390575755173993
survival_time: 7.79999999999998
No reset possible
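The per-episode `details` entries above are JSON objects keyed by episode name, and because each job here ran a single episode, every `_max`/`_mean`/`_median`/`_min` aggregate collapses to the one episode value. A minimal sketch of recomputing those aggregates from such a record (the `details` literal below is an abridged copy of the job above; the variable names are illustrative, not part of the platform's API):

```python
import json
import statistics

# Abridged per-episode record, as it appears in a job's "details" field
# (single episode "LFP-norm-zigzag-000-ego0").
details = json.loads(
    '{"LFP-norm-zigzag-000-ego0": '
    '{"driven_any": 2.5495755655862675, "survival_time": 7.79999999999998}}'
)

# Collect each metric across episodes.
metrics = {}
for episode, record in details.items():
    for name, value in record.items():
        metrics.setdefault(name, []).append(value)

# Recompute the four aggregates shown in the stats tables.
for name, values in metrics.items():
    aggregates = {
        "min": min(values),
        "max": max(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
    }
    # With one episode, all four aggregates equal the single value.
    print(name, aggregates)
```

With multiple episodes (e.g. the sim-0of4 … sim-3of4 steps of one submission), the same loop would yield genuinely distinct min/max/mean/median values per metric.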
Job 75263 | submission 13946 | YU CHEN | "CBC Net v2 test - added APR 1 2 times anomaly + mar 28 bc_v1" | aido-LFP-sim-validation | sim-0of4 | success | up to date: no | evaluator gpu-production-spot-0-02 | duration 0:02:28
survival_time_median: 7.649999999999981
in-drivable-lane_median: 1.0499999999999985
driven_lanedir_consec_median: 1.8747153594098336
deviation-center-line_median: 0.4143775001067752


Other stats (single episode, so max = mean = median = min for each stat):
agent_compute-ego0: 0.09851610970187497
complete-iteration: 0.33430748171620556
deviation-center-line: 0.4143775001067752
deviation-heading: 1.9924558137618669
distance-from-start: 1.8466903228386105
driven_any: 2.4632269587842366
driven_lanedir_consec: 1.8747153594098336
driven_lanedir: 1.8747153594098336
get_duckie_state: 0.022503817236268674
get_robot_state: 0.004091979621292709
get_state_dump: 0.008833457897235822
get_ui_image: 0.026026436260768344
in-drivable-lane: 1.0499999999999985
per-episodes details: {"LFP-norm-zigzag-000-ego0": {"driven_any": 2.4632269587842366, "get_ui_image": 0.026026436260768344, "step_physics": 0.15062133522776813, "survival_time": 7.649999999999981, "driven_lanedir": 1.8747153594098336, "get_state_dump": 0.008833457897235822, "get_robot_state": 0.004091979621292709, "sim_render-ego0": 0.004241777704907702, "get_duckie_state": 0.022503817236268674, "in-drivable-lane": 1.0499999999999985, "deviation-heading": 1.9924558137618669, "agent_compute-ego0": 0.09851610970187497, "complete-iteration": 0.33430748171620556, "set_robot_commands": 0.0025982376816984895, "distance-from-start": 1.8466903228386105, "deviation-center-line": 0.4143775001067752, "driven_lanedir_consec": 1.8747153594098336, "sim_compute_sim_state": 0.014574431753777838, "sim_compute_performance-ego0": 0.002185832370411266}}
set_robot_commands: 0.0025982376816984895
sim_compute_performance-ego0: 0.002185832370411266
sim_compute_sim_state: 0.014574431753777838
sim_render-ego0: 0.004241777704907702
simulation-passed: 1
step_physics: 0.15062133522776813
survival_time: 7.649999999999981
No reset possible
Job 75262 | submission 13965 | YU CHEN | "CBC Net v2 test - APR 3 BC TFdata + mar 28 anomaly" | aido-LFP-sim-validation | sim-1of4 | success | up to date: no | evaluator gpu-production-spot-0-02 | duration 0:06:49
survival_time_median: 39.649999999999885
in-drivable-lane_median: 18.299999999999873
driven_lanedir_consec_median: 7.804391012934344
deviation-center-line_median: 1.41513097991717


Other stats (single episode, so max = mean = median = min for each stat):
agent_compute-ego0: 0.09080416189333052
complete-iteration: 0.2425952729410128
deviation-center-line: 1.41513097991717
deviation-heading: 8.552380489158473
distance-from-start: 1.2190666015959106
driven_any: 14.156926734979574
driven_lanedir_consec: 7.804391012934344
driven_lanedir: 7.804391012934344
get_duckie_state: 0.004429582084156104
get_robot_state: 0.003825748897619752
get_state_dump: 0.005678363951387573
get_ui_image: 0.01869705341925249
in-drivable-lane: 18.299999999999873
per-episodes details: {"LFP-norm-small_loop-000-ego0": {"driven_any": 14.156926734979574, "get_ui_image": 0.01869705341925249, "step_physics": 0.10481474801935536, "survival_time": 39.649999999999885, "driven_lanedir": 7.804391012934344, "get_state_dump": 0.005678363951387573, "get_robot_state": 0.003825748897619752, "sim_render-ego0": 0.004027437202876401, "get_duckie_state": 0.004429582084156104, "in-drivable-lane": 18.299999999999873, "deviation-heading": 8.552380489158473, "agent_compute-ego0": 0.09080416189333052, "complete-iteration": 0.2425952729410128, "set_robot_commands": 0.0024730603700921275, "distance-from-start": 1.2190666015959106, "deviation-center-line": 1.41513097991717, "driven_lanedir_consec": 7.804391012934344, "sim_compute_sim_state": 0.005723439776326908, "sim_compute_performance-ego0": 0.0020214136061199968}}
set_robot_commands: 0.0024730603700921275
sim_compute_performance-ego0: 0.0020214136061199968
sim_compute_sim_state: 0.005723439776326908
sim_render-ego0: 0.004027437202876401
simulation-passed: 1
step_physics: 0.10481474801935536
survival_time: 39.649999999999885
No reset possible
Job 75259 | submission 13992 | Frank (Chude) Qian 🇨🇦 | "CBC Net - MixTraining - Expert LF Human LFP" | aido-LFP-sim-validation | sim-3of4 | success | up to date: no | evaluator gpu-production-spot-0-02 | duration 0:01:29
survival_time_median: 4.099999999999993
in-drivable-lane_median: 0.09999999999999964
driven_lanedir_consec_median: 1.3521242271056677
deviation-center-line_median: 0.34852677871533444


Other stats (single episode, so max = mean = median = min for each stat):
agent_compute-ego0: 0.05626332328980227
complete-iteration: 0.2591621128909559
deviation-center-line: 0.34852677871533444
deviation-heading: 1.126469430910052
distance-from-start: 1.3324102868087782
driven_any: 1.4877087166689322
driven_lanedir_consec: 1.3521242271056677
driven_lanedir: 1.3521242271056677
get_duckie_state: 0.02079785013773355
get_robot_state: 0.003856299871421722
get_state_dump: 0.008673409381544733
get_ui_image: 0.02283874477248594
in-drivable-lane: 0.09999999999999964
per-episodes details: {"LFP-norm-techtrack-000-ego0": {"driven_any": 1.4877087166689322, "get_ui_image": 0.02283874477248594, "step_physics": 0.1268442395221756, "survival_time": 4.099999999999993, "driven_lanedir": 1.3521242271056677, "get_state_dump": 0.008673409381544733, "get_robot_state": 0.003856299871421722, "sim_render-ego0": 0.0039032338613487153, "get_duckie_state": 0.02079785013773355, "in-drivable-lane": 0.09999999999999964, "deviation-heading": 1.126469430910052, "agent_compute-ego0": 0.05626332328980227, "complete-iteration": 0.2591621128909559, "set_robot_commands": 0.0023839760975665355, "distance-from-start": 1.3324102868087782, "deviation-center-line": 0.34852677871533444, "driven_lanedir_consec": 1.3521242271056677, "sim_compute_sim_state": 0.01150508099291698, "sim_compute_performance-ego0": 0.0019970083811196937}}
set_robot_commands: 0.0023839760975665355
sim_compute_performance-ego0: 0.0019970083811196937
sim_compute_sim_state: 0.01150508099291698
sim_render-ego0: 0.0039032338613487153
simulation-passed: 1
step_physics: 0.1268442395221756
survival_time: 4.099999999999993
No reset possible
Job 75256 | submission 13992 | Frank (Chude) Qian 🇨🇦 | "CBC Net - MixTraining - Expert LF Human LFP" | aido-LFP-sim-validation | sim-3of4 | success | up to date: no | evaluator gpu-production-spot-0-02 | duration 0:01:29
survival_time_median: 4.049999999999994
in-drivable-lane_median: 0.14999999999999947
driven_lanedir_consec_median: 1.3136955573855866
deviation-center-line_median: 0.356738646534927


Other stats (single episode, so max = mean = median = min for each stat):
agent_compute-ego0: 0.05925165153131253
complete-iteration: 0.2537932425010495
deviation-center-line: 0.356738646534927
deviation-heading: 1.1415790547997708
distance-from-start: 1.3370134316085396
driven_any: 1.484929600008034
driven_lanedir_consec: 1.3136955573855866
driven_lanedir: 1.3136955573855866
get_duckie_state: 0.020757704246334913
get_robot_state: 0.003655294092690073
get_state_dump: 0.007717818748660204
get_ui_image: 0.02299552429013136
in-drivable-lane: 0.14999999999999947
per-episodes details: {"LFP-norm-techtrack-000-ego0": {"driven_any": 1.484929600008034, "get_ui_image": 0.02299552429013136, "step_physics": 0.11953789431874344, "survival_time": 4.049999999999994, "driven_lanedir": 1.3136955573855866, "get_state_dump": 0.007717818748660204, "get_robot_state": 0.003655294092690073, "sim_render-ego0": 0.00392903060447879, "get_duckie_state": 0.020757704246334913, "in-drivable-lane": 0.14999999999999947, "deviation-heading": 1.1415790547997708, "agent_compute-ego0": 0.05925165153131253, "complete-iteration": 0.2537932425010495, "set_robot_commands": 0.0023400899840564262, "distance-from-start": 1.3370134316085396, "deviation-center-line": 0.356738646534927, "driven_lanedir_consec": 1.3136955573855866, "sim_compute_sim_state": 0.011562103178442978, "sim_compute_performance-ego0": 0.0019517235639618664}}
set_robot_commands: 0.0023400899840564262
sim_compute_performance-ego0: 0.0019517235639618664
sim_compute_sim_state: 0.011562103178442978
sim_render-ego0: 0.00392903060447879
simulation-passed: 1
step_physics: 0.11953789431874344
survival_time: 4.049999999999994
No reset possible
Job 75252 | submission 13992 | Frank (Chude) Qian 🇨🇦 | "CBC Net - MixTraining - Expert LF Human LFP" | aido-LFP-sim-validation | sim-1of4 | success | up to date: no | evaluator gpu-production-spot-0-02 | duration 0:02:20
survival_time_median: 10.550000000000017
in-drivable-lane_median: 3.5500000000000047
driven_lanedir_consec_median: 2.6657781032937056
deviation-center-line_median: 0.48973463987753063


Other stats (single episode, so max = mean = median = min for each stat):
agent_compute-ego0: 0.056661833007380646
complete-iteration: 0.2087543505542683
deviation-center-line: 0.48973463987753063
deviation-heading: 3.155233368520306
distance-from-start: 1.0655991956451842
driven_any: 4.737571481861933
driven_lanedir_consec: 2.6657781032937056
driven_lanedir: 2.6657781032937056
get_duckie_state: 0.00434820719485013
get_robot_state: 0.0038220263877004946
get_state_dump: 0.005537201773445561
get_ui_image: 0.019290304408883147
in-drivable-lane: 3.5500000000000047
per-episodes details: {"LFP-norm-small_loop-000-ego0": {"driven_any": 4.737571481861933, "get_ui_image": 0.019290304408883147, "step_physics": 0.10503799622913576, "survival_time": 10.550000000000017, "driven_lanedir": 2.6657781032937056, "get_state_dump": 0.005537201773445561, "get_robot_state": 0.0038220263877004946, "sim_render-ego0": 0.003970572408640159, "get_duckie_state": 0.00434820719485013, "in-drivable-lane": 3.5500000000000047, "deviation-heading": 3.155233368520306, "agent_compute-ego0": 0.056661833007380646, "complete-iteration": 0.2087543505542683, "set_robot_commands": 0.002384997763723698, "distance-from-start": 1.0655991956451842, "deviation-center-line": 0.48973463987753063, "driven_lanedir_consec": 2.6657781032937056, "sim_compute_sim_state": 0.005603486636899553, "sim_compute_performance-ego0": 0.0020077284776939537}}
set_robot_commands: 0.002384997763723698
sim_compute_performance-ego0: 0.0020077284776939537
sim_compute_sim_state: 0.005603486636899553
sim_render-ego0: 0.003970572408640159
simulation-passed: 1
step_physics: 0.10503799622913576
survival_time: 10.550000000000017
No reset possible
Job 75250 | submission 13992 | Frank (Chude) Qian 🇨🇦 | "CBC Net - MixTraining - Expert LF Human LFP" | aido-LFP-sim-validation | sim-0of4 | success | up to date: no | evaluator gpu-production-spot-0-02 | duration 0:01:23
survival_time_median: 2.8999999999999977
in-drivable-lane_median: 0.8500000000000006
driven_lanedir_consec_median: 0.3120726329175951
deviation-center-line_median: 0.1614197856120933


Other stats (single episode, so max = mean = median = min for each stat):
agent_compute-ego0: 0.05983676748760676
complete-iteration: 0.2664307578135345
deviation-center-line: 0.1614197856120933
deviation-heading: 1.3693833414330436
distance-from-start: 0.8240729789389207
driven_any: 0.861472269819183
driven_lanedir_consec: 0.3120726329175951
driven_lanedir: 0.3120726329175951
get_duckie_state: 0.021208112522707143
get_robot_state: 0.003876617399312682
get_state_dump: 0.008392956297276384
get_ui_image: 0.026418685913085938
in-drivable-lane: 0.8500000000000006
per-episodes details: {"LFP-norm-zigzag-000-ego0": {"driven_any": 0.861472269819183, "get_ui_image": 0.026418685913085938, "step_physics": 0.12640622914847682, "survival_time": 2.8999999999999977, "driven_lanedir": 0.3120726329175951, "get_state_dump": 0.008392956297276384, "get_robot_state": 0.003876617399312682, "sim_render-ego0": 0.004150228985285355, "get_duckie_state": 0.021208112522707143, "in-drivable-lane": 0.8500000000000006, "deviation-heading": 1.3693833414330436, "agent_compute-ego0": 0.05983676748760676, "complete-iteration": 0.2664307578135345, "set_robot_commands": 0.0025064581531589313, "distance-from-start": 0.8240729789389207, "deviation-center-line": 0.1614197856120933, "driven_lanedir_consec": 0.3120726329175951, "sim_compute_sim_state": 0.011453980106418417, "sim_compute_performance-ego0": 0.002081608368178546}}
set_robot_commands: 0.0025064581531589313
sim_compute_performance-ego0: 0.002081608368178546
sim_compute_sim_state: 0.011453980106418417
sim_render-ego0: 0.004150228985285355
simulation-passed: 1
step_physics: 0.12640622914847682
survival_time: 2.8999999999999977
No reset possible
Job 75245 | submission 13994 | Frank (Chude) Qian 🇨🇦 | "CBC Net - MixTraining - Expert LF Human LFP - Best Loss" | aido-LFP-sim-validation | sim-3of4 | success | up to date: no | evaluator gpu-production-spot-0-02 | duration 0:03:35
survival_time_median: 14.100000000000064
in-drivable-lane_median: 3.8000000000000327
driven_lanedir_consec_median: 3.977202534842343
deviation-center-line_median: 0.858270432325888


Other stats (single episode, so max = mean = median = min for each stat):
agent_compute-ego0: 0.056245035501756434
complete-iteration: 0.26898570785252873
deviation-center-line: 0.858270432325888
deviation-heading: 3.5392107100415853
distance-from-start: 2.1668456371550406
driven_any: 5.685266929463761
driven_lanedir_consec: 3.977202534842343
driven_lanedir: 3.977202534842343
get_duckie_state: 0.021362771415036473
get_robot_state: 0.003881836948462173
get_state_dump: 0.008321602015950233
get_ui_image: 0.0243888137197326
in-drivable-lane: 3.8000000000000327
per-episodes details: {"LFP-norm-techtrack-000-ego0": {"driven_any": 5.685266929463761, "get_ui_image": 0.0243888137197326, "step_physics": 0.13353931271987754, "survival_time": 14.100000000000064, "driven_lanedir": 3.977202534842343, "get_state_dump": 0.008321602015950233, "get_robot_state": 0.003881836948462173, "sim_render-ego0": 0.004043122483647754, "get_duckie_state": 0.021362771415036473, "in-drivable-lane": 3.8000000000000327, "deviation-heading": 3.5392107100415853, "agent_compute-ego0": 0.056245035501756434, "complete-iteration": 0.26898570785252873, "set_robot_commands": 0.0024215088295009867, "distance-from-start": 2.1668456371550406, "deviation-center-line": 0.858270432325888, "driven_lanedir_consec": 3.977202534842343, "sim_compute_sim_state": 0.012664590202034996, "sim_compute_performance-ego0": 0.0020177979351353728}}
set_robot_commands: 0.0024215088295009867
sim_compute_performance-ego0: 0.0020177979351353728
sim_compute_sim_state: 0.012664590202034996
sim_render-ego0: 0.004043122483647754
simulation-passed: 1
step_physics: 0.13353931271987754
survival_time: 14.100000000000064
No reset possible
Job 75241 | submission 13994 | Frank (Chude) Qian 🇨🇦 | "CBC Net - MixTraining - Expert LF Human LFP - Best Loss" | aido-LFP-sim-validation | sim-2of4 | success | up to date: no | evaluator gpu-production-spot-0-02 | duration 0:01:54
survival_time_median: 6.449999999999985
in-drivable-lane_median: 0.7999999999999972
driven_lanedir_consec_median: 2.113970998254086
deviation-center-line_median: 0.4149616271523358


other stats (single episode, so max = mean = median = min):
agent_compute-ego0: 0.05685097804436317
complete-iteration: 0.24428294621981103
deviation-center-line: 0.4149616271523358
deviation-heading: 1.6191179183167697
distance-from-start: 2.0608600193306104
driven_any: 2.6343530373217594
driven_lanedir_consec: 2.113970998254086
driven_lanedir: 2.113970998254086
get_duckie_state: 0.025424982951237605
get_robot_state: 0.003847918143639198
get_state_dump: 0.009199991592994105
get_ui_image: 0.02077050575843224
in-drivable-lane: 0.7999999999999972
per-episode details: {"LFP-norm-loop-000-ego0": {"driven_any": 2.6343530373217594, "get_ui_image": 0.02077050575843224, "step_physics": 0.1103177235676692, "survival_time": 6.449999999999985, "driven_lanedir": 2.113970998254086, "get_state_dump": 0.009199991592994105, "get_robot_state": 0.003847918143639198, "sim_render-ego0": 0.004056528898385855, "get_duckie_state": 0.025424982951237605, "in-drivable-lane": 0.7999999999999972, "deviation-heading": 1.6191179183167697, "agent_compute-ego0": 0.05685097804436317, "complete-iteration": 0.24428294621981103, "set_robot_commands": 0.0024242437802828275, "distance-from-start": 2.0608600193306104, "deviation-center-line": 0.4149616271523358, "driven_lanedir_consec": 2.113970998254086, "sim_compute_sim_state": 0.009230666894179123, "sim_compute_performance-ego0": 0.0020545024138230545}}
set_robot_commands: 0.0024242437802828275
sim_compute_performance-ego0: 0.0020545024138230545
sim_compute_sim_state: 0.009230666894179123
sim_render-ego0: 0.004056528898385855
simulation-passed: 1
step_physics: 0.1103177235676692
survival_time: 6.449999999999985
No reset possible
Job 75238 | submission 13996 | Frank (Chude) Qian πŸ‡¨πŸ‡¦ | "baseline-behavior-cloning" | aido-LFP-sim-validation | sim-2of4 | host-error | up to date: no | gpu-production-spot-0-02 | duration 0:00:42
The container "solution-ego0" exited with code 139.


Exit code 139 means the container was killed by signal 11 (SIGSEGV, a segmentation fault: 139 = 128 + 11); on this evaluator it is commonly caused by the agent exhausting GPU memory.
No reset possible
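Container exit codes above 128 follow the shell convention of encoding the fatal signal as code minus 128. A small sketch of decoding them (the helper name `describe_exit` is illustrative, not part of any Duckietown tooling):

```python
import signal

def describe_exit(code: int) -> str:
    """Map a container exit code to a human-readable cause."""
    if code > 128:
        # Codes above 128 mean the process died from signal (code - 128).
        signum = code - 128
        return f"killed by signal {signum} ({signal.Signals(signum).name})"
    return f"exited normally with status {code}"

print(describe_exit(139))  # 139 - 128 = 11, i.e. SIGSEGV (segmentation fault)
```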
Job 75237 | submission 13996 | Frank (Chude) Qian πŸ‡¨πŸ‡¦ | "baseline-behavior-cloning" | aido-LFP-sim-validation | sim-2of4 | host-error | up to date: no | gpu-production-spot-0-02 | duration 0:00:38
The container "solution-ego0" exited with code 139.


Exit code 139 means the container was killed by signal 11 (SIGSEGV, a segmentation fault: 139 = 128 + 11); on this evaluator it is commonly caused by the agent exhausting GPU memory.
No reset possible
Job 75234 | submission 13996 | Frank (Chude) Qian πŸ‡¨πŸ‡¦ | "baseline-behavior-cloning" | aido-LFP-sim-validation | sim-0of4 | host-error | up to date: no | gpu-production-spot-0-02 | duration 0:00:43
The container "solution-ego0" exited with code 139.


Exit code 139 means the container was killed by signal 11 (SIGSEGV, a segmentation fault: 139 = 128 + 11); on this evaluator it is commonly caused by the agent exhausting GPU memory.
No reset possible
Job 75223 | submission 13945 | YU CHEN | "CBC Net v2 test - added APR 1 2 times anomaly + mar 28 bc_v1" | aido-LF-sim-validation | sim-2of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:10:02
driven_lanedir_consec_median: 14.797548314925423
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.368157835437445
in-drivable-lane_median: 15.69999999999964


other stats (single episode, so max = mean = median = min):
agent_compute-ego0: 0.09488394298124672
complete-iteration: 0.245722577335634
deviation-center-line: 3.368157835437445
deviation-heading: 19.90171599002572
distance-from-start: 1.2949678745677855
driven_any: 22.31836835204288
driven_lanedir_consec: 14.797548314925423
driven_lanedir: 14.797548314925423
get_duckie_state: 1.7201473671232632e-06
get_robot_state: 0.004060301752908343
get_state_dump: 0.005073975762360102
get_ui_image: 0.019323670794624374
in-drivable-lane: 15.69999999999964
per-episode details: {"LF-norm-small_loop-000-ego0": {"driven_any": 22.31836835204288, "get_ui_image": 0.019323670794624374, "step_physics": 0.10742941069464004, "survival_time": 59.99999999999873, "driven_lanedir": 14.797548314925423, "get_state_dump": 0.005073975762360102, "get_robot_state": 0.004060301752908343, "sim_render-ego0": 0.004131769161240246, "get_duckie_state": 1.7201473671232632e-06, "in-drivable-lane": 15.69999999999964, "deviation-heading": 19.90171599002572, "agent_compute-ego0": 0.09488394298124672, "complete-iteration": 0.245722577335634, "set_robot_commands": 0.0026230869642602317, "distance-from-start": 1.2949678745677855, "deviation-center-line": 3.368157835437445, "driven_lanedir_consec": 14.797548314925423, "sim_compute_sim_state": 0.005932819833366401, "sim_compute_performance-ego0": 0.002161460355556974}}
set_robot_commands: 0.0026230869642602317
sim_compute_performance-ego0: 0.002161460355556974
sim_compute_sim_state: 0.005932819833366401
sim_render-ego0: 0.004131769161240246
simulation-passed: 1
step_physics: 0.10742941069464004
survival_time: 59.99999999999873
No reset possible
Job 75216 | submission 13964 | YU CHEN | "CBC Net v2 test - APR 3 BC TFdata + mar 28 anomaly" | aido-LF-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:11:28
driven_lanedir_consec_median: 14.217342710689998
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.4395558600641936
in-drivable-lane_median: 18.49999999999961


other stats (single episode, so max = mean = median = min):
agent_compute-ego0: 0.0953143435056561
complete-iteration: 0.2911535992808981
deviation-center-line: 3.4395558600641936
deviation-heading: 12.376896693402358
distance-from-start: 3.633107573954118
driven_any: 22.499933111225953
driven_lanedir_consec: 14.217342710689998
driven_lanedir: 14.217342710689998
get_duckie_state: 2.6450367593249114e-06
get_robot_state: 0.003966183586978198
get_state_dump: 0.0049416042188125085
get_ui_image: 0.02507905757595955
in-drivable-lane: 18.49999999999961
per-episode details: {"LF-norm-zigzag-000-ego0": {"driven_any": 22.499933111225953, "get_ui_image": 0.02507905757595955, "step_physics": 0.13988662937300886, "survival_time": 59.99999999999873, "driven_lanedir": 14.217342710689998, "get_state_dump": 0.0049416042188125085, "get_robot_state": 0.003966183586978198, "sim_render-ego0": 0.004184517832620257, "get_duckie_state": 2.6450367593249114e-06, "in-drivable-lane": 18.49999999999961, "deviation-heading": 12.376896693402358, "agent_compute-ego0": 0.0953143435056561, "complete-iteration": 0.2911535992808981, "set_robot_commands": 0.0025166500418708285, "distance-from-start": 3.633107573954118, "deviation-center-line": 3.4395558600641936, "driven_lanedir_consec": 14.217342710689998, "sim_compute_sim_state": 0.012976187849719757, "sim_compute_performance-ego0": 0.0021888552656975716}}
set_robot_commands: 0.0025166500418708285
sim_compute_performance-ego0: 0.0021888552656975716
sim_compute_sim_state: 0.012976187849719757
sim_render-ego0: 0.004184517832620257
simulation-passed: 1
step_physics: 0.13988662937300886
survival_time: 59.99999999999873
No reset possible
Job 75212 | submission 13991 | Frank (Chude) Qian πŸ‡¨πŸ‡¦ | "CBC Net - MixTraining - Expert LF Human LFP" | aido-LF-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:02:26
driven_lanedir_consec_median: 2.262229091527575
survival_time_median: 10.050000000000008
deviation-center-line_median: 0.4163704661040507
in-drivable-lane_median: 4.050000000000016


other stats (single episode, so max = mean = median = min):
agent_compute-ego0: 0.057897826232532465
complete-iteration: 0.2426367466992671
deviation-center-line: 0.4163704661040507
deviation-heading: 2.427905563279131
distance-from-start: 2.1824948355259433
driven_any: 4.182089944581996
driven_lanedir_consec: 2.262229091527575
driven_lanedir: 2.262229091527575
get_duckie_state: 1.2853358051564434e-06
get_robot_state: 0.003811145773028383
get_state_dump: 0.004891355438987807
get_ui_image: 0.02408968576110235
in-drivable-lane: 4.050000000000016
per-episode details: {"LF-norm-zigzag-000-ego0": {"driven_any": 4.182089944581996, "get_ui_image": 0.02408968576110235, "step_physics": 0.13342894421945703, "survival_time": 10.050000000000008, "driven_lanedir": 2.262229091527575, "get_state_dump": 0.004891355438987807, "get_robot_state": 0.003811145773028383, "sim_render-ego0": 0.004091763260340927, "get_duckie_state": 1.2853358051564434e-06, "in-drivable-lane": 4.050000000000016, "deviation-heading": 2.427905563279131, "agent_compute-ego0": 0.057897826232532465, "complete-iteration": 0.2426367466992671, "set_robot_commands": 0.002366035291464022, "distance-from-start": 2.1824948355259433, "deviation-center-line": 0.4163704661040507, "driven_lanedir_consec": 2.262229091527575, "sim_compute_sim_state": 0.009947786236753558, "sim_compute_performance-ego0": 0.0020161548463424835}}
set_robot_commands: 0.002366035291464022
sim_compute_performance-ego0: 0.0020161548463424835
sim_compute_sim_state: 0.009947786236753558
sim_render-ego0: 0.004091763260340927
simulation-passed: 1
step_physics: 0.13342894421945703
survival_time: 10.050000000000008
No reset possible
Job 75202 | submission 13995 | Frank (Chude) Qian πŸ‡¨πŸ‡¦ | "baseline-behavior-cloning" | aido-LF-sim-validation | sim-0of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:09:00
driven_lanedir_consec_median: 12.615610256467493
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.9173370843753967
in-drivable-lane_median: 5.549999999999881


other stats (single episode, so max = mean = median = min):
agent_compute-ego0: 0.04099513549391773
complete-iteration: 0.18406356185004671
deviation-center-line: 2.9173370843753967
deviation-heading: 12.45076186346992
distance-from-start: 2.8791858824707286
driven_any: 14.590098009138794
driven_lanedir_consec: 12.615610256467493
driven_lanedir: 12.615610256467493
get_duckie_state: 1.4962205084833276e-06
get_robot_state: 0.004088599120051935
get_state_dump: 0.005057725183771214
get_ui_image: 0.02023488814189571
in-drivable-lane: 5.549999999999881
per-episode details: {"LF-norm-loop-000-ego0": {"driven_any": 14.590098009138794, "get_ui_image": 0.02023488814189571, "step_physics": 0.09606503666092414, "survival_time": 59.99999999999873, "driven_lanedir": 12.615610256467493, "get_state_dump": 0.005057725183771214, "get_robot_state": 0.004088599120051935, "sim_render-ego0": 0.004201989090512138, "get_duckie_state": 1.4962205084833276e-06, "in-drivable-lane": 5.549999999999881, "deviation-heading": 12.45076186346992, "agent_compute-ego0": 0.04099513549391773, "complete-iteration": 0.18406356185004671, "set_robot_commands": 0.002469105287753573, "distance-from-start": 2.8791858824707286, "deviation-center-line": 2.9173370843753967, "driven_lanedir_consec": 12.615610256467493, "sim_compute_sim_state": 0.00868381846457298, "sim_compute_performance-ego0": 0.002166834203130895}}
set_robot_commands: 0.002469105287753573
sim_compute_performance-ego0: 0.002166834203130895
sim_compute_sim_state: 0.00868381846457298
sim_render-ego0: 0.004201989090512138
simulation-passed: 1
step_physics: 0.09606503666092414
survival_time: 59.99999999999873
No reset possible
Job 75201 | submission 13997 | Frank (Chude) Qian πŸ‡¨πŸ‡¦ | "baseline-behavior-cloning New Dataset" | aido-LF-sim-validation | sim-0of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:01:31
driven_lanedir_consec_median: 0.6843932516675731
survival_time_median: 4.249999999999993
deviation-center-line_median: 0.07660300176021982
in-drivable-lane_median: 2.499999999999992


other stats (single episode, so max = mean = median = min):
agent_compute-ego0: 0.05965434395989706
complete-iteration: 0.182100398595943
deviation-center-line: 0.07660300176021982
deviation-heading: 0.37684726120114265
distance-from-start: 1.6686818416121934
driven_any: 1.6692813311138313
driven_lanedir_consec: 0.6843932516675731
driven_lanedir: 0.6843932516675731
get_duckie_state: 1.6633854355922965e-06
get_robot_state: 0.004057579262312068
get_state_dump: 0.0051761782446572945
get_ui_image: 0.0205080509185791
in-drivable-lane: 2.499999999999992
per-episode details: {"LF-norm-loop-000-ego0": {"driven_any": 1.6692813311138313, "get_ui_image": 0.0205080509185791, "step_physics": 0.07451153355975483, "survival_time": 4.249999999999993, "driven_lanedir": 0.6843932516675731, "get_state_dump": 0.0051761782446572945, "get_robot_state": 0.004057579262312068, "sim_render-ego0": 0.004298814507417901, "get_duckie_state": 1.6633854355922965e-06, "in-drivable-lane": 2.499999999999992, "deviation-heading": 0.37684726120114265, "agent_compute-ego0": 0.05965434395989706, "complete-iteration": 0.182100398595943, "set_robot_commands": 0.0024725226468818133, "distance-from-start": 1.6686818416121934, "deviation-center-line": 0.07660300176021982, "driven_lanedir_consec": 0.6843932516675731, "sim_compute_sim_state": 0.00911985718926718, "sim_compute_performance-ego0": 0.0021959737289783568}}
set_robot_commands: 0.0024725226468818133
sim_compute_performance-ego0: 0.0021959737289783568
sim_compute_sim_state: 0.00911985718926718
sim_render-ego0: 0.004298814507417901
simulation-passed: 1
step_physics: 0.07451153355975483
survival_time: 4.249999999999993
No reset possible
Job 75198 | submission 13997 | Frank (Chude) Qian πŸ‡¨πŸ‡¦ | "baseline-behavior-cloning New Dataset" | aido-LF-sim-validation | sim-2of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:01:51
driven_lanedir_consec_median: 0.09474527037559266
survival_time_median: 4.199999999999993
deviation-center-line_median: 0.051538748608172986
in-drivable-lane_median: 3.6499999999999937


other stats (single episode, so max = mean = median = min):
agent_compute-ego0: 0.05961399078369141
complete-iteration: 0.16775882664848776
deviation-center-line: 0.051538748608172986
deviation-heading: 0.377089827997732
distance-from-start: 1.6427083841933496
driven_any: 1.643280065438764
driven_lanedir_consec: 0.09474527037559266
driven_lanedir: 0.09474527037559266
get_duckie_state: 1.4950247371897978e-06
get_robot_state: 0.003964853286743164
get_state_dump: 0.005192349938785329
get_ui_image: 0.01868718652164235
in-drivable-lane: 3.6499999999999937
per-episode details: {"LF-norm-small_loop-000-ego0": {"driven_any": 1.643280065438764, "get_ui_image": 0.01868718652164235, "step_physics": 0.06661557029275333, "survival_time": 4.199999999999993, "driven_lanedir": 0.09474527037559266, "get_state_dump": 0.005192349938785329, "get_robot_state": 0.003964853286743164, "sim_render-ego0": 0.004318716946770163, "get_duckie_state": 1.4950247371897978e-06, "in-drivable-lane": 3.6499999999999937, "deviation-heading": 0.377089827997732, "agent_compute-ego0": 0.05961399078369141, "complete-iteration": 0.16775882664848776, "set_robot_commands": 0.002503052879782284, "distance-from-start": 1.6427083841933496, "deviation-center-line": 0.051538748608172986, "driven_lanedir_consec": 0.09474527037559266, "sim_compute_sim_state": 0.004667130638571346, "sim_compute_performance-ego0": 0.002097141041475184}}
set_robot_commands: 0.002503052879782284
sim_compute_performance-ego0: 0.002097141041475184
sim_compute_sim_state: 0.004667130638571346
sim_render-ego0: 0.004318716946770163
simulation-passed: 1
step_physics: 0.06661557029275333
survival_time: 4.199999999999993
No reset possible
Job 75196 | submission 14014 | YU CHEN | "CBC Net v2 test - APR 6 anomaly + mar 28 bc" | aido-LFP-sim-validation | sim-0of4 | success | up to date: no | gpu-production-spot-0-02 | duration 0:01:27
survival_time_median: 3.049999999999997
in-drivable-lane_median: 0.8500000000000006
driven_lanedir_consec_median: 0.2878262373018219
deviation-center-line_median: 0.0931751592310958


other stats
agent_compute-ego0 (max = mean = median = min): 0.09870269990736438
complete-iteration (max = mean = median = min): 0.29817480425680837
deviation-center-line (max = mean = min): 0.0931751592310958
deviation-heading (max = mean = median = min): 1.2158916427287991
distance-from-start (max = mean = median = min): 0.7983029748417194
driven_any (max = mean = median = min): 0.8172556827547933
driven_lanedir_consec (max = mean = min): 0.2878262373018219
driven_lanedir (max = mean = median = min): 0.2878262373018219
get_duckie_state (max = mean = median = min): 0.02211930674891318
get_robot_state (max = mean = median = min): 0.004102472336061539
get_state_dump (max = mean = median = min): 0.008606545386775848
get_ui_image (max = mean = median = min): 0.0255556914114183
in-drivable-lane (max = mean = min): 0.8500000000000006
per-episodes details:
{"LFP-norm-zigzag-000-ego0": {"driven_any": 0.8172556827547933, "get_ui_image": 0.0255556914114183, "step_physics": 0.11755856006376204, "survival_time": 3.049999999999997, "driven_lanedir": 0.2878262373018219, "get_state_dump": 0.008606545386775848, "get_robot_state": 0.004102472336061539, "sim_render-ego0": 0.00425578317334575, "get_duckie_state": 0.02211930674891318, "in-drivable-lane": 0.8500000000000006, "deviation-heading": 1.2158916427287991, "agent_compute-ego0": 0.09870269990736438, "complete-iteration": 0.29817480425680837, "set_robot_commands": 0.002618020580660912, "distance-from-start": 0.7983029748417194, "deviation-center-line": 0.0931751592310958, "driven_lanedir_consec": 0.2878262373018219, "sim_compute_sim_state": 0.012186961789284982, "sim_compute_performance-ego0": 0.002341243528550671}}
set_robot_commands (max = mean = median = min): 0.002618020580660912
sim_compute_performance-ego0 (max = mean = median = min): 0.002341243528550671
sim_compute_sim_state (max = mean = median = min): 0.012186961789284982
sim_render-ego0 (max = mean = median = min): 0.00425578317334575
simulation-passed: 1
step_physics (max = mean = median = min): 0.11755856006376204
survival_time (max = mean = min): 3.049999999999997
No reset possible
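Each job's per-episodes details entry is a JSON object keyed by episode name, and the max/mean/median/min rows above are simply aggregates of those per-episode values; with a single episode, all four statistics coincide. A minimal sketch of that aggregation using only the standard library (the `aggregate` helper and the truncated sample payload are illustrative, not part of the evaluator code):

```python
import json
from statistics import mean, median

# A trimmed-down "details" payload in the same shape as the evaluator's
# per-episodes output: one entry per episode, mapping metric -> value.
details_json = """
{"LFP-norm-zigzag-000-ego0": {"survival_time": 3.049999999999997,
                              "deviation-heading": 1.2158916427287991}}
"""

def aggregate(details: dict) -> dict:
    """Collect each metric across episodes and compute summary stats."""
    per_metric: dict = {}
    for episode_stats in details.values():
        for metric, value in episode_stats.items():
            per_metric.setdefault(metric, []).append(value)
    return {
        metric: {
            "max": max(vals),
            "mean": mean(vals),
            "median": median(vals),
            "min": min(vals),
        }
        for metric, vals in per_metric.items()
    }

stats = aggregate(json.loads(details_json))
print(stats["survival_time"])
```

With one episode the four aggregates are identical, which is exactly the pattern visible in every "other stats" block on this page.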
Job 75194 | submission 14014 | user YU CHEN | label "CBC Net v2 test - APR 6 anomaly + mar 28 bc" | challenge aido-LFP-sim-validation | step sim-0of4 | status success | up to date: no | evaluator gpu-production-spot-0-02 | duration 0:01:31
survival_time_median: 3.049999999999997
in-drivable-lane_median: 0.8000000000000006
driven_lanedir_consec_median: 0.3200059883738431
deviation-center-line_median: 0.10044436209950008


other stats
agent_compute-ego0 (max = mean = median = min): 0.09623615203365204
complete-iteration (max = mean = median = min): 0.2962409104070356
deviation-center-line (max = mean = min): 0.10044436209950008
deviation-heading (max = mean = median = min): 1.1680917465337428
distance-from-start (max = mean = median = min): 0.8040814721864905
driven_any (max = mean = median = min): 0.8251692374356733
driven_lanedir_consec (max = mean = min): 0.3200059883738431
driven_lanedir (max = mean = median = min): 0.3200059883738431
get_duckie_state (max = mean = median = min): 0.021388146185105848
get_robot_state (max = mean = median = min): 0.00404123721584197
get_state_dump (max = mean = median = min): 0.008299935248590285
get_ui_image (max = mean = median = min): 0.026020569186056813
in-drivable-lane (max = mean = min): 0.8000000000000006
per-episodes details:
{"LFP-norm-zigzag-000-ego0": {"driven_any": 0.8251692374356733, "get_ui_image": 0.026020569186056813, "step_physics": 0.11951704563633088, "survival_time": 3.049999999999997, "driven_lanedir": 0.3200059883738431, "get_state_dump": 0.008299935248590285, "get_robot_state": 0.00404123721584197, "sim_render-ego0": 0.004119938419711205, "get_duckie_state": 0.021388146185105848, "in-drivable-lane": 0.8000000000000006, "deviation-heading": 1.1680917465337428, "agent_compute-ego0": 0.09623615203365204, "complete-iteration": 0.2962409104070356, "set_robot_commands": 0.002440533330363612, "distance-from-start": 0.8040814721864905, "deviation-center-line": 0.10044436209950008, "driven_lanedir_consec": 0.3200059883738431, "sim_compute_sim_state": 0.011907046841036889, "sim_compute_performance-ego0": 0.002165663626886183}}
set_robot_commands (max = mean = median = min): 0.002440533330363612
sim_compute_performance-ego0 (max = mean = median = min): 0.002165663626886183
sim_compute_sim_state (max = mean = median = min): 0.011907046841036889
sim_render-ego0 (max = mean = median = min): 0.004119938419711205
simulation-passed: 1
step_physics (max = mean = median = min): 0.11951704563633088
survival_time (max = mean = min): 3.049999999999997
No reset possible
Job 75191 | submission 14032 | user YU CHEN | label "CBC V2, mar28 bc, mar31_apr6 anomaly" | challenge aido-LFP-sim-validation | step sim-1of4 | status success | up to date: no | evaluator gpu-production-spot-0-02 | duration 0:02:46
survival_time_median: 12.600000000000044
in-drivable-lane_median: 4.200000000000012
driven_lanedir_consec_median: 2.661972796250848
deviation-center-line_median: 0.5656103241031356


other stats
agent_compute-ego0 (max = mean = median = min): 0.10134834545874312
complete-iteration (max = mean = median = min): 0.25926447480092407
deviation-center-line (max = mean = min): 0.5656103241031356
deviation-heading (max = mean = median = min): 3.918641911385643
distance-from-start (max = mean = median = min): 1.1175733269217405
driven_any (max = mean = median = min): 4.632224920232811
driven_lanedir_consec (max = mean = min): 2.661972796250848
driven_lanedir (max = mean = median = min): 2.661972796250848
get_duckie_state (max = mean = median = min): 0.004640687595714222
get_robot_state (max = mean = median = min): 0.0040183915451110115
get_state_dump (max = mean = median = min): 0.0059291930066738205
get_ui_image (max = mean = median = min): 0.01927294278804493
in-drivable-lane (max = mean = min): 4.200000000000012
per-episodes details:
{"LFP-norm-small_loop-000-ego0": {"driven_any": 4.632224920232811, "get_ui_image": 0.01927294278804493, "step_physics": 0.10921252292135487, "survival_time": 12.600000000000044, "driven_lanedir": 2.661972796250848, "get_state_dump": 0.0059291930066738205, "get_robot_state": 0.0040183915451110115, "sim_render-ego0": 0.00413424506960179, "get_duckie_state": 0.004640687595714222, "in-drivable-lane": 4.200000000000012, "deviation-heading": 3.918641911385643, "agent_compute-ego0": 0.10134834545874312, "complete-iteration": 0.25926447480092407, "set_robot_commands": 0.0025413375598168655, "distance-from-start": 1.1175733269217405, "deviation-center-line": 0.5656103241031356, "driven_lanedir_consec": 2.661972796250848, "sim_compute_sim_state": 0.005947711439471942, "sim_compute_performance-ego0": 0.0021216492407877927}}
set_robot_commands (max = mean = median = min): 0.0025413375598168655
sim_compute_performance-ego0 (max = mean = median = min): 0.0021216492407877927
sim_compute_sim_state (max = mean = median = min): 0.005947711439471942
sim_render-ego0 (max = mean = median = min): 0.00413424506960179
simulation-passed: 1
step_physics (max = mean = median = min): 0.10921252292135487
survival_time (max = mean = min): 12.600000000000044
No reset possible
Job 75188 | submission 14032 | user YU CHEN | label "CBC V2, mar28 bc, mar31_apr6 anomaly" | challenge aido-LFP-sim-validation | step sim-1of4 | status success | up to date: no | evaluator gpu-production-spot-0-02 | duration 0:01:49
survival_time_median: 7.199999999999982
in-drivable-lane_median: 3.699999999999988
driven_lanedir_consec_median: 1.2367710810257213
deviation-center-line_median: 0.25397917109009105


other stats
agent_compute-ego0 (max = mean = median = min): 0.08878877738426472
complete-iteration (max = mean = median = min): 0.2245027196818385
deviation-center-line (max = mean = min): 0.25397917109009105
deviation-heading (max = mean = median = min): 1.4132167242456763
distance-from-start (max = mean = median = min): 1.2787073495703185
driven_any (max = mean = median = min): 2.2058485248635016
driven_lanedir_consec (max = mean = min): 1.2367710810257213
driven_lanedir (max = mean = median = min): 1.2367710810257213
get_duckie_state (max = mean = median = min): 0.004104533688775424
get_robot_state (max = mean = median = min): 0.003572758312883048
get_state_dump (max = mean = median = min): 0.005274634525693696
get_ui_image (max = mean = median = min): 0.018174926165876716
in-drivable-lane (max = mean = min): 3.699999999999988
per-episodes details:
{"LFP-norm-small_loop-000-ego0": {"driven_any": 2.2058485248635016, "get_ui_image": 0.018174926165876716, "step_physics": 0.09148117427168222, "survival_time": 7.199999999999982, "driven_lanedir": 1.2367710810257213, "get_state_dump": 0.005274634525693696, "get_robot_state": 0.003572758312883048, "sim_render-ego0": 0.003747770704072097, "get_duckie_state": 0.004104533688775424, "in-drivable-lane": 3.699999999999988, "deviation-heading": 1.4132167242456763, "agent_compute-ego0": 0.08878877738426472, "complete-iteration": 0.2245027196818385, "set_robot_commands": 0.0022134271161309603, "distance-from-start": 1.2787073495703185, "deviation-center-line": 0.25397917109009105, "driven_lanedir_consec": 1.2367710810257213, "sim_compute_sim_state": 0.005161497510712722, "sim_compute_performance-ego0": 0.0018860060593177532}}
set_robot_commands (max = mean = median = min): 0.0022134271161309603
sim_compute_performance-ego0 (max = mean = median = min): 0.0018860060593177532
sim_compute_sim_state (max = mean = median = min): 0.005161497510712722
sim_render-ego0 (max = mean = median = min): 0.003747770704072097
simulation-passed: 1
step_physics (max = mean = median = min): 0.09148117427168222
survival_time (max = mean = min): 7.199999999999982
No reset possible
Job 75178 | submission 14036 | user YU CHEN | label "CBC V2 non dropout comparsion, mar28_apr6 bc, mar31_apr6 anomaly" | challenge aido-LFP-sim-validation | step sim-3of4 | status success | up to date: no | evaluator gpu-production-spot-0-02 | duration 0:14:01
survival_time_median: 59.99999999999873
in-drivable-lane_median: 17.249999999999744
driven_lanedir_consec_median: 14.974026035070274
deviation-center-line_median: 4.052649522647632


other stats
agent_compute-ego0 (max = mean = median = min): 0.0954578444920015
complete-iteration (max = mean = median = min): 0.3165451950276523
deviation-center-line (max = mean = min): 4.052649522647632
deviation-heading (max = mean = median = min): 9.795018669595931
distance-from-start (max = mean = median = min): 3.2355118490500145
driven_any (max = mean = median = min): 22.436984663436583
driven_lanedir_consec (max = mean = min): 14.974026035070274
driven_lanedir (max = mean = median = min): 14.974026035070274
get_duckie_state (max = mean = median = min): 0.021937653980683924
get_robot_state (max = mean = median = min): 0.004050439839359128
get_state_dump (max = mean = median = min): 0.008421061140214474
get_ui_image (max = mean = median = min): 0.024570681868147395
in-drivable-lane (max = mean = min): 17.249999999999744
per-episodes details:
{"LFP-norm-techtrack-000-ego0": {"driven_any": 22.436984663436583, "get_ui_image": 0.024570681868147395, "step_physics": 0.14109186089902398, "survival_time": 59.99999999999873, "driven_lanedir": 14.974026035070274, "get_state_dump": 0.008421061140214474, "get_robot_state": 0.004050439839359128, "sim_render-ego0": 0.004146370859964007, "get_duckie_state": 0.021937653980683924, "in-drivable-lane": 17.249999999999744, "deviation-heading": 9.795018669595931, "agent_compute-ego0": 0.0954578444920015, "complete-iteration": 0.3165451950276523, "set_robot_commands": 0.002463303636650956, "distance-from-start": 3.2355118490500145, "deviation-center-line": 4.052649522647632, "driven_lanedir_consec": 14.974026035070274, "sim_compute_sim_state": 0.012134222265683444, "sim_compute_performance-ego0": 0.0021585072208503003}}
set_robot_commands (max = mean = median = min): 0.002463303636650956
sim_compute_performance-ego0 (max = mean = median = min): 0.0021585072208503003
sim_compute_sim_state (max = mean = median = min): 0.012134222265683444
sim_render-ego0 (max = mean = median = min): 0.004146370859964007
simulation-passed: 1
step_physics (max = mean = median = min): 0.14109186089902398
survival_time (max = mean = min): 59.99999999999873
No reset possible
Job 75174 | submission 13504 | user AndrΓ‘s Kalapos πŸ‡­πŸ‡Ί | label "real-v1.0-3091-310" | challenge aido-LF-sim-testing | step sim-2of4 | status failed | up to date: no | evaluator gpu-production-spot-0-02 | duration 0:00:39
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
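Both failed jobs above abort with the same root error: `Failed to get convolution algorithm. This is probably because cuDNN failed to initialize`. In TensorFlow this typically means the process could not obtain GPU memory when the first convolution kernel was planned, often because TF pre-allocates the whole GPU or another process already holds it. A common mitigation (a generic sketch, not part of the submission code) is to request incremental GPU memory allocation before TensorFlow initializes:

```python
import os

# Must be set before TensorFlow is imported: tells TF to grow GPU memory
# allocation on demand instead of reserving the entire device up front.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# Equivalent in-code form (TF 2.x API), shown commented out so this
# snippet runs even where TensorFlow is not installed:
# import tensorflow as tf
# for gpu in tf.config.experimental.list_physical_devices("GPU"):
#     tf.config.experimental.set_memory_growth(gpu, True)
```

For an RLlib submission like the one in these logs, the environment variable would need to be set in the container (e.g. in the Dockerfile) so it takes effect before `PPOTrainer` builds its policy graph.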
Job 75167 | submission 13504 | András Kalapos 🇭🇺 | real-v1.0-3091-310 | aido-LF-sim-testing | sim-2of4 | failed | up to date: no | gpu-production-spot-0-02 | duration 0:00:41
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
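A note on the failure mode above: the repeated `tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize` is, in TensorFlow 2.x, commonly a symptom of the process failing to allocate GPU memory at session startup (for example when TF tries to reserve all free memory, or another process already holds it). A minimal sketch of a common workaround, assuming TF 2.x and its documented `TF_FORCE_GPU_ALLOW_GROWTH` switch; this is not part of the submission code logged above:

```python
import os

# Must be set before TensorFlow is imported anywhere in the process.
# It asks TF to grow GPU memory allocation on demand instead of
# reserving all free memory at startup, a common trigger for
# "Failed to get convolution algorithm ... cuDNN failed to initialize".
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# Equivalent in-code form (TF 2.x), shown for reference only:
# import tensorflow as tf
# for gpu in tf.config.experimental.list_physical_devices("GPU"):
#     tf.config.experimental.set_memory_growth(gpu, True)
```

Either form must run before the first TensorFlow operation (here, before `PPOTrainer(...)` is constructed in the submission's `init()`), otherwise the allocator is already configured and the setting has no effect.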
75163 | 13504 | András Kalapos 🇭🇺 | real-v1.0-3091-310 | aido-LF-sim-testing | sim-2of4 | failed | no | gpu-production-spot-0-02 | 0:00:44
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
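Editor's note: both failed jobs abort with the same TensorFlow error, "Failed to get convolution algorithm ... cuDNN failed to initialize", raised while the submission builds its PPOTrainer in init(). This error commonly appears when TensorFlow tries to grab all GPU memory up front on a busy evaluator. A frequently used workaround (a sketch, not part of the original submission) is to request on-demand GPU memory growth before TensorFlow is imported, e.g. at the top of the submission's solution.py:

```python
import os

# Hedged workaround: ask TensorFlow to allocate GPU memory incrementally
# instead of reserving it all at startup. This often avoids the
# "Failed to get convolution algorithm ... cuDNN failed to initialize"
# error seen in the tracebacks above. The flag must be set before
# `import tensorflow` runs anywhere in the process.
os.environ.setdefault("TF_FORCE_GPU_ALLOW_GROWTH", "true")

# ... only after this point: import tensorflow / ray.rllib and build the model
```

Equivalently, TF 2.x exposes `tf.config.experimental.set_memory_growth(gpu, True)` per device; the environment variable has the advantage of working without touching library import order inside ray/rllib.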
Job 75159 | submission 13511 | András Kalapos 🇭🇺 | real-v1.0-3091-310 | aido-LFP-sim-validation | sim-3of4 | failed | up to date: no | gpu-production-spot-0-02 | 0:00:40
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
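Both this job's failure ("Failed to get convolution algorithm. This is probably because cuDNN failed to initialize") and the OOM in the next job below are common symptoms of the agent's TensorFlow session exhausting GPU memory at startup. A frequently suggested mitigation, sketched here under the assumption that the submission controls the `rllib_config` dict passed to `PPOTrainer` in `/submission/model.py`, is to cap RLlib's GPU reservation and let TensorFlow grow its allocation on demand via RLlib's `tf_session_args` passthrough:

```python
# Hypothetical sketch: assumes `config["rllib_config"]` is the dict handed to
# PPOTrainer(config=config["rllib_config"]) as in the traceback above.
config["rllib_config"].update({
    # Reserve only a fraction of the GPU instead of claiming it all.
    "num_gpus": 0.25,
    # TF1-style session options that RLlib forwards to tf.Session():
    "tf_session_args": {
        # Allocate GPU memory incrementally rather than up front, which
        # avoids cuDNN init failures when the device is nearly full.
        "gpu_options": {"allow_growth": True},
    },
})
```

Whether this resolves the error depends on what else is occupying the GPU on the evaluator at submission start; it is a configuration workaround, not a fix for genuinely oversized models.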
Job 75157 · submission 13504 · András Kalapos 🇭🇺 · real-v1.0-3091-310 · aido-LF-sim-testing · sim-0of4 · failed · up to date: no · gpu-production-spot-0-02 · duration 0:01:07
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor of shape [16] and type float
              || 	 [[{{node default_policy/conv1/bias/Initializer/zeros}}, {{node default_policy/conv_value_1/bias/Initializer/zeros}}]]
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 322, in _initialize_loss
              ||     self._sess.run(tf.global_variables_initializer())
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor of shape [16] and type float
              || 	 [[node default_policy/conv1/bias/Initializer/zeros (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:30) , node default_policy/conv_value_1/bias/Initializer/zeros (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:75) ]]
              ||
              || Original stack trace for 'default_policy/conv1/bias/Initializer/zeros':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 147, in __init__
              ||     self.model = ModelCatalog.get_model_v2(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/catalog.py", line 347, in get_model_v2
              ||     return wrapper(obs_space, action_space, num_outputs, model_config,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 30, in __init__
              ||     last_layer = tf.keras.layers.Conv2D(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 757, in __call__
              ||     self._maybe_build(inputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 2098, in _maybe_build
              ||     self.build(input_shapes)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 206, in build
              ||     self.bias = self.add_weight(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 431, in add_weight
              ||     variable = self._add_variable_with_custom_getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/tracking/base.py", line 745, in _add_variable_with_custom_getter
              ||     new_variable = getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 133, in make_variable
              ||     return tf_variables.VariableV1(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 260, in __call__
              ||     return cls._variable_v1_call(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 206, in _variable_v1_call
              ||     return previous_getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 199, in <lambda>
              ||     previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variable_scope.py", line 2583, in default_variable_creator
              ||     return resource_variable_ops.ResourceVariable(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 264, in __call__
              ||     return super(VariableMetaclass, cls).__call__(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1507, in __init__
              ||     self._init_from_args(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1651, in _init_from_args
              ||     initial_value() if init_from_fn else initial_value,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/init_ops.py", line 114, in __call__
              ||     return array_ops.zeros(shape, dtype)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
              ||     return target(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 2747, in wrapped
              ||     tensor = fun(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 2794, in zeros
              ||     output = _constant_if_small(zero, shape, dtype, name)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 2733, in _constant_if_small
              ||     return constant(value, shape=shape, dtype=dtype, name=name)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 263, in constant
              ||     return _constant_impl(value, dtype, shape, name, verify_shape=False,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 285, in _constant_impl
              ||     const_tensor = g._create_op_internal(  # pylint: disable=protected-access
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor of shape [16] and type float
              || | 	 [[{{node default_policy/conv1/bias/Initializer/zeros}}, {{node default_policy/conv_value_1/bias/Initializer/zeros}}]]
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 322, in _initialize_loss
              || |     self._sess.run(tf.global_variables_initializer())
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor of shape [16] and type float
              || | 	 [[node default_policy/conv1/bias/Initializer/zeros (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:30) , node default_policy/conv_value_1/bias/Initializer/zeros (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:75) ]]
              || |
              || | Original stack trace for 'default_policy/conv1/bias/Initializer/zeros':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 147, in __init__
              || |     self.model = ModelCatalog.get_model_v2(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/catalog.py", line 347, in get_model_v2
              || |     return wrapper(obs_space, action_space, num_outputs, model_config,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 30, in __init__
              || |     last_layer = tf.keras.layers.Conv2D(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 757, in __call__
              || |     self._maybe_build(inputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 2098, in _maybe_build
              || |     self.build(input_shapes)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 206, in build
              || |     self.bias = self.add_weight(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 431, in add_weight
              || |     variable = self._add_variable_with_custom_getter(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/tracking/base.py", line 745, in _add_variable_with_custom_getter
              || |     new_variable = getter(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 133, in make_variable
              || |     return tf_variables.VariableV1(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 260, in __call__
              || |     return cls._variable_v1_call(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 206, in _variable_v1_call
              || |     return previous_getter(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 199, in <lambda>
              || |     previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variable_scope.py", line 2583, in default_variable_creator
              || |     return resource_variable_ops.ResourceVariable(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 264, in __call__
              || |     return super(VariableMetaclass, cls).__call__(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1507, in __init__
              || |     self._init_from_args(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1651, in _init_from_args
              || |     initial_value() if init_from_fn else initial_value,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/init_ops.py", line 114, in __call__
              || |     return array_ops.zeros(shape, dtype)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
              || |     return target(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 2747, in wrapped
              || |     tensor = fun(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 2794, in zeros
              || |     output = _constant_if_small(zero, shape, dtype, name)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 2733, in _constant_if_small
              || |     return constant(value, shape=shape, dtype=dtype, name=name)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 263, in constant
              || |     return _constant_impl(value, dtype, shape, name, verify_shape=False,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 285, in _constant_impl
              || |     const_tensor = g._create_op_internal(  # pylint: disable=protected-access
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
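Both failed jobs on this evaluator die while building the PPO policy: the run above hits a `ResourceExhaustedError` (OOM) while initializing even a tiny `[16]` bias tensor, and the run below fails with "Failed to get convolution algorithm" from cuDNN — both are classic symptoms of the GPU's memory already being claimed by another process when the submission container starts, since TensorFlow by default reserves nearly all device memory up front. A minimal, hedged sketch of the usual mitigation (assuming TensorFlow 2.x is importable inside the submission container; the function name `enable_gpu_memory_growth` is illustrative, not part of any Duckietown or RLlib API):

```python
def enable_gpu_memory_growth():
    """Ask TensorFlow to allocate GPU memory on demand instead of
    reserving the whole device at startup. Returns the number of GPUs
    configured (0 if TensorFlow or a GPU is unavailable)."""
    try:
        import tensorflow as tf
    except ImportError:
        return 0  # TensorFlow not installed; nothing to configure
    gpus = tf.config.list_physical_devices("GPU")
    for gpu in gpus:
        # Must be set before any op touches the device, i.e. before
        # constructing the PPOTrainer in the submission's init().
        tf.config.experimental.set_memory_growth(gpu, True)
    return len(gpus)

configured = enable_gpu_memory_growth()
print(f"GPUs configured for memory growth: {configured}")
```

Calling this at the top of `solution.py`, before the `RLlibModel`/`PPOTrainer` construction seen in the tracebacks, typically avoids both failure modes; alternatively, RLlib's trainer config accepts a fractional `num_gpus` to cap the share of the device a worker may use.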
Job 75156 | submission 13504 | András Kalapos 🇭🇺 | real-v1.0-3091-310 | aido-LF-sim-testing | sim-0of4 | failed | up to date: no | gpu-production-spot-0-02 | 0:00:41
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
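Both failed jobs above abort with the same root cause: `Failed to get convolution algorithm. This is probably because cuDNN failed to initialize`, raised while RLlib builds the PPO policy network. This error frequently indicates that TensorFlow could not get enough GPU memory (for example, because another process had already reserved it). A common mitigation, sketched here under the assumption that the submission controls its own TensorFlow startup (e.g. at the top of `solution.py`, before `RLlibModel`/`PPOTrainer` are constructed), is to enable on-demand GPU memory growth:

```python
import os

# Ask TensorFlow to allocate GPU memory on demand instead of reserving it
# all at startup. The environment variable must be set BEFORE TensorFlow
# is imported anywhere in the process.
os.environ.setdefault("TF_FORCE_GPU_ALLOW_GROWTH", "true")

# In-code equivalent (run right after `import tensorflow as tf`, before
# any model is built):
#
#   for gpu in tf.config.experimental.list_physical_devices("GPU"):
#       tf.config.experimental.set_memory_growth(gpu, True)
```

Whether this resolves the failure depends on the actual cause (memory pressure vs. a CUDA/cuDNN version mismatch in the submission image); if the driver and cuDNN versions are incompatible, the image itself needs to be rebuilt against the evaluator's CUDA runtime.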
Job 75152 | submission 13511 | András Kalapos 🇭🇺 | real-v1.0-3091-310 | aido-LFP-sim-validation | sim-0of4 | failed | up to date: no | gpu-production-spot-0-02 | 0:00:39
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __i