
Evaluator 5101

ID: 5101
evaluator: gpu-production-spot-2-05
owner: I don't have one πŸ˜€
machine: gpu-prod_9d1e166cd184
process: gpu-production-spot-2-05_9d1e166cd184
version: 6.2.7
first heard:
last heard:
status: inactive
# evaluating:
# success: 17 65883
# timeout:
# failed: 4 66085
# error:
# aborted: 1 66292
# host-error: 3 66141
arm: 0
x86_64: 1
Mac: 0
gpu available: 1
Number of processors: 64
Processor frequency: 0.0 GHz
Free % of processors: 99%
RAM total: 249.0 GB
RAM free: 238.0 GB
Disk: 969.3 GB
Disk available: 866.7 GB
Docker Hub:
P1: 1
P2:
Cloud simulations: 1
PI Camera: 0
# Duckiebots: 0
Map 3x3 available:
Number of duckies:
gpu cores:
AIDO 2 Map LF public:
AIDO 2 Map LF private:
AIDO 2 Map LFV public:
AIDO 2 Map LFV private:
AIDO 2 Map LFVI public:
AIDO 2 Map LFVI private:
AIDO 3 Map LF public:
AIDO 3 Map LF private:
AIDO 3 Map LFV public:
AIDO 3 Map LFV private:
AIDO 3 Map LFVI public:
AIDO 3 Map LFVI private:
AIDO 5 Map large loop:
ETU track:
for 2021, map is ETH_small_inter
IPFS mountpoint /ipfs available:
IPNS mountpoint /ipns available:

Evaluator jobs

Job ID | submission | user | user label | challenge | step | status | up to date | evaluator | duration
66313 | 13912 | YU CHEN | CBC Net v2 - test | aido-LFP-sim-validation | sim-3of4 | success | no | gpu-production-spot-2-05 | 0:01:53
survival_time_median: 5.09999999999999
in-drivable-lane_median: 0.6499999999999977
driven_lanedir_consec_median: 1.436249872723809
deviation-center-line_median: 0.2572912890018666


other stats
agent_compute-ego0 (max = mean = median = min): 0.0971953151295486
complete-iteration (max = mean = median = min): 0.34681148436462994
deviation-center-line (max = mean = min): 0.2572912890018666
deviation-heading (max = mean = median = min): 1.3066783607690309
driven_any (max = mean = median = min): 1.8201455625439784
driven_lanedir_consec (max = mean = min): 1.436249872723809
driven_lanedir (max = mean = median = min): 1.436249872723809
get_duckie_state (max = mean = median = min): 0.022177430032526407
get_robot_state (max = mean = median = min): 0.0039754029616568855
get_state_dump (max = mean = median = min): 0.008462165165873408
get_ui_image (max = mean = median = min): 0.03739741010573304
in-drivable-lane (max = mean = min): 0.6499999999999977
per-episodes details: {"LFP-norm-techtrack-000-ego0": {"driven_any": 1.8201455625439784, "get_ui_image": 0.03739741010573304, "step_physics": 0.1552537436624175, "survival_time": 5.09999999999999, "driven_lanedir": 1.436249872723809, "get_state_dump": 0.008462165165873408, "get_robot_state": 0.0039754029616568855, "sim_render-ego0": 0.004205844934704234, "get_duckie_state": 0.022177430032526407, "in-drivable-lane": 0.6499999999999977, "deviation-heading": 1.3066783607690309, "agent_compute-ego0": 0.0971953151295486, "complete-iteration": 0.34681148436462994, "set_robot_commands": 0.0025929339881082185, "deviation-center-line": 0.2572912890018666, "driven_lanedir_consec": 1.436249872723809, "sim_compute_sim_state": 0.013196429002632216, "sim_compute_performance-ego0": 0.002239724964771456}}
set_robot_commands (max = mean = median = min): 0.0025929339881082185
sim_compute_performance-ego0 (max = mean = median = min): 0.002239724964771456
sim_compute_sim_state (max = mean = median = min): 0.013196429002632216
sim_render-ego0 (max = mean = median = min): 0.004205844934704234
simulation-passed: 1
step_physics (max = mean = median = min): 0.1552537436624175
survival_time (max = mean = min): 5.09999999999999
No reset possible
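
Since this step evaluated a single episode, every metric's max, mean, median, and min coincide; the aggregates are computed from the per-episodes details JSON shown above. A minimal Python sketch of that aggregation (the aggregate() helper is illustrative, not platform code):

import json
import statistics

def aggregate(details_json: str) -> dict:
    """Recompute min/median/mean/max for every metric across episodes."""
    episodes = json.loads(details_json)
    stats = {}
    metric_names = {name for ep in episodes.values() for name in ep}
    for name in sorted(metric_names):
        values = [ep[name] for ep in episodes.values() if name in ep]
        stats[name] = {
            "min": min(values),
            "median": statistics.median(values),
            "mean": statistics.mean(values),
            "max": max(values),
        }
    return stats

# With one episode, all four statistics are equal:
one_episode = '{"LFP-norm-techtrack-000-ego0": {"survival_time": 5.09999999999999}}'
print(aggregate(one_episode)["survival_time"])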
66299 | 13912 | YU CHEN | CBC Net v2 - test | aido-LFP-sim-validation | sim-3of4 | success | no | gpu-production-spot-2-05 | 0:01:51
survival_time_median: 4.99999999999999
in-drivable-lane_median: 0.7999999999999972
driven_lanedir_consec_median: 1.3965112543992717
deviation-center-line_median: 0.2463156055494699


other stats
agent_compute-ego0 (max = mean = median = min): 0.10038291345728506
complete-iteration (max = mean = median = min): 0.36815290167780207
deviation-center-line (max = mean = min): 0.2463156055494699
deviation-heading (max = mean = median = min): 1.2448317668538065
driven_any (max = mean = median = min): 1.8034921373295267
driven_lanedir_consec (max = mean = min): 1.3965112543992717
driven_lanedir (max = mean = median = min): 1.3965112543992717
get_duckie_state (max = mean = median = min): 0.022400228103788777
get_robot_state (max = mean = median = min): 0.004132360515027943
get_state_dump (max = mean = median = min): 0.008805534627177928
get_ui_image (max = mean = median = min): 0.0383539341463901
in-drivable-lane (max = mean = min): 0.7999999999999972
per-episodes details: {"LFP-norm-techtrack-000-ego0": {"driven_any": 1.8034921373295267, "get_ui_image": 0.0383539341463901, "step_physics": 0.170644745968356, "survival_time": 4.99999999999999, "driven_lanedir": 1.3965112543992717, "get_state_dump": 0.008805534627177928, "get_robot_state": 0.004132360515027943, "sim_render-ego0": 0.004228197702086798, "get_duckie_state": 0.022400228103788777, "in-drivable-lane": 0.7999999999999972, "deviation-heading": 1.2448317668538065, "agent_compute-ego0": 0.10038291345728506, "complete-iteration": 0.36815290167780207, "set_robot_commands": 0.002456759462262144, "deviation-center-line": 0.2463156055494699, "driven_lanedir_consec": 1.3965112543992717, "sim_compute_sim_state": 0.014327112991030854, "sim_compute_performance-ego0": 0.0023000027873728533}}
set_robot_commands (max = mean = median = min): 0.002456759462262144
sim_compute_performance-ego0 (max = mean = median = min): 0.0023000027873728533
sim_compute_sim_state (max = mean = median = min): 0.014327112991030854
sim_render-ego0 (max = mean = median = min): 0.004228197702086798
simulation-passed: 1
step_physics (max = mean = median = min): 0.170644745968356
survival_time (max = mean = min): 4.99999999999999
No reset possible
66292 | 13798 | Nicholas Kostelnik | template-random | aido-hello-sim-validation | 370 | aborted | no | gpu-production-spot-2-05 | 0:00:22
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 261, in _raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.8/dist-packages/requests/models.py", line 941, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.35/images/create?tag=sha256%3Ab13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691&fromImage=docker.io%2Fnitaigao%2Faido-submissions

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 65, in docker_pull
    pulling = client.api.pull(repository=repository, tag=br.tag, stream=True, decode=True)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/image.py", line 415, in pull
    self._raise_for_status(response)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/local/lib/python3.8/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.ImageNotFound: 404 Client Error: Not Found ("pull access denied for nitaigao/aido-submissions, repository does not exist or may require 'docker login': denied: requested access to the resource is denied")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 38, in docker_pull_retry
    return docker_pull(client, image_name, quiet=quiet)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 84, in docker_pull
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: Cannot pull repo  docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691  tag  None

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 745, in get_cr
    cr = run_single(
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 944, in run_single
    docker_pull_retry(client, image, ntimes=4, wait=5)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 42, in docker_pull_retry
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: After trying 4 I still could not pull docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691
No reset possible
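
The abort above happened before any simulation ran: the evaluator could not pull the submission image because the Docker Hub repository does not exist or is private, so all four pull attempts were bound to fail. A sketch of a pull-with-retry loop in the spirit of the docker_pull_retry call in the traceback, using the Docker SDK for Python (pull_with_retry is an illustrative helper; the runner's real implementation lives in duckietown_build_utils):

import time
import docker
from docker.errors import APIError

def pull_with_retry(client: docker.DockerClient, image: str,
                    ntimes: int = 4, wait: float = 5.0):
    """Try to pull an image several times; a missing or private
    repository (404 / access denied) fails on every attempt."""
    last_error = None
    for _ in range(ntimes):
        try:
            return client.images.pull(image)
        except APIError as e:  # ImageNotFound is a subclass of APIError
            last_error = e
            time.sleep(wait)
    raise RuntimeError(f"After {ntimes} attempts, could not pull {image}") from last_error

# client = docker.from_env()
# pull_with_retry(client, "docker.io/someuser/some-image:tag")  # hypothetical image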
66281 | 13694 | Samuel Alexander | template-tensorflow | aido-LF-sim-validation | sim-3of4 | success | no | gpu-production-spot-2-05 | 0:02:13
driven_lanedir_consec_median: 1.0858933315577182
survival_time_median: 6.5999999999999845
deviation-center-line_median: 0.24778230221916572
in-drivable-lane_median: 2.699999999999995


other stats
agent_compute-ego0 (max = mean = median = min): 0.032817370909497254
complete-iteration (max = mean = median = min): 0.323935325880696
deviation-center-line (max = mean = min): 0.24778230221916572
deviation-heading (max = mean = median = min): 2.004564146114112
driven_any (max = mean = median = min): 1.9357744252803708
driven_lanedir_consec (max = mean = min): 1.0858933315577182
driven_lanedir (max = mean = median = min): 1.0858933315577182
get_duckie_state (max = mean = median = min): 1.670722674606438e-06
get_robot_state (max = mean = median = min): 0.00439651926657311
get_state_dump (max = mean = median = min): 0.005573527257245286
get_ui_image (max = mean = median = min): 0.041907936110532375
in-drivable-lane (max = mean = min): 2.699999999999995
per-episodes details: {"LF-norm-zigzag-000-ego0": {"driven_any": 1.9357744252803708, "get_ui_image": 0.041907936110532375, "step_physics": 0.2174755092850305, "survival_time": 6.5999999999999845, "driven_lanedir": 1.0858933315577182, "get_state_dump": 0.005573527257245286, "get_robot_state": 0.00439651926657311, "sim_render-ego0": 0.004437982587886036, "get_duckie_state": 1.670722674606438e-06, "in-drivable-lane": 2.699999999999995, "deviation-heading": 2.004564146114112, "agent_compute-ego0": 0.032817370909497254, "complete-iteration": 0.323935325880696, "set_robot_commands": 0.0026285074707260705, "deviation-center-line": 0.24778230221916572, "driven_lanedir_consec": 1.0858933315577182, "sim_compute_sim_state": 0.012148292441117136, "sim_compute_performance-ego0": 0.0024476499485790284}}
set_robot_commands (max = mean = median = min): 0.0026285074707260705
sim_compute_performance-ego0 (max = mean = median = min): 0.0024476499485790284
sim_compute_sim_state (max = mean = median = min): 0.012148292441117136
sim_render-ego0 (max = mean = median = min): 0.004437982587886036
simulation-passed: 1
step_physics (max = mean = median = min): 0.2174755092850305
survival_time (max = mean = min): 6.5999999999999845
No reset possible
66241 | 13732 | YU CHEN | BC Net V2 | aido-LF-sim-validation | sim-2of4 | success | no | gpu-production-spot-2-05 | 0:09:16
driven_lanedir_consec_median: 12.919875991510793
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.906946259794221
in-drivable-lane_median: 20.549999999999557


other stats
agent_compute-ego0 (max = mean = median = min): 0.054140171341653866
complete-iteration (max = mean = median = min): 0.23533892949157512
deviation-center-line (max = mean = min): 3.906946259794221
deviation-heading (max = mean = median = min): 16.55125658520288
driven_any (max = mean = median = min): 21.257821221402764
driven_lanedir_consec (max = mean = min): 12.919875991510793
driven_lanedir (max = mean = median = min): 12.919875991510793
get_duckie_state (max = mean = median = min): 1.325893163879547e-06
get_robot_state (max = mean = median = min): 0.0038815626593851032
get_state_dump (max = mean = median = min): 0.004773210228531684
get_ui_image (max = mean = median = min): 0.027736556023781143
in-drivable-lane (max = mean = min): 20.549999999999557
per-episodes details: {"LF-norm-small_loop-000-ego0": {"driven_any": 21.257821221402764, "get_ui_image": 0.027736556023781143, "step_physics": 0.12966028558126, "survival_time": 59.99999999999873, "driven_lanedir": 12.919875991510793, "get_state_dump": 0.004773210228531684, "get_robot_state": 0.0038815626593851032, "sim_render-ego0": 0.003964115836836714, "get_duckie_state": 1.325893163879547e-06, "in-drivable-lane": 20.549999999999557, "deviation-heading": 16.55125658520288, "agent_compute-ego0": 0.054140171341653866, "complete-iteration": 0.23533892949157512, "set_robot_commands": 0.0025089994060506827, "deviation-center-line": 3.906946259794221, "driven_lanedir_consec": 12.919875991510793, "sim_compute_sim_state": 0.006475515707049342, "sim_compute_performance-ego0": 0.0021092347757306128}}
set_robot_commands (max = mean = median = min): 0.0025089994060506827
sim_compute_performance-ego0 (max = mean = median = min): 0.0021092347757306128
sim_compute_sim_state (max = mean = median = min): 0.006475515707049342
sim_render-ego0 (max = mean = median = min): 0.003964115836836714
simulation-passed: 1
step_physics (max = mean = median = min): 0.12966028558126
survival_time (max = mean = min): 59.99999999999873
No reset possible
66219 | 13911 | YU CHEN | CBC Net v2 - test | aido-LF-sim-validation | sim-0of4 | success | no | gpu-production-spot-2-05 | 0:10:13
driven_lanedir_consec_median: 22.765873144567063
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.5102658701890066
in-drivable-lane_median: 9.899999999999832


other stats
agent_compute-ego0 (max = mean = median = min): 0.0905143011618812
complete-iteration (max = mean = median = min): 0.2811050276077359
deviation-center-line (max = mean = min): 3.5102658701890066
deviation-heading (max = mean = median = min): 9.79871361700687
driven_any (max = mean = median = min): 27.686488436967753
driven_lanedir_consec (max = mean = min): 22.765873144567063
driven_lanedir (max = mean = median = min): 22.765873144567063
get_duckie_state (max = mean = median = min): 1.2417220751709188e-06
get_robot_state (max = mean = median = min): 0.0038477006701009655
get_state_dump (max = mean = median = min): 0.004709499265430968
get_ui_image (max = mean = median = min): 0.029979187880427912
in-drivable-lane (max = mean = min): 9.899999999999832
per-episodes details: {"LF-norm-loop-000-ego0": {"driven_any": 27.686488436967753, "get_ui_image": 0.029979187880427912, "step_physics": 0.13362541683111262, "survival_time": 59.99999999999873, "driven_lanedir": 22.765873144567063, "get_state_dump": 0.004709499265430968, "get_robot_state": 0.0038477006701009655, "sim_render-ego0": 0.003966671342555927, "get_duckie_state": 1.2417220751709188e-06, "in-drivable-lane": 9.899999999999832, "deviation-heading": 9.79871361700687, "agent_compute-ego0": 0.0905143011618812, "complete-iteration": 0.2811050276077359, "set_robot_commands": 0.0024909502659908045, "deviation-center-line": 3.5102658701890066, "driven_lanedir_consec": 22.765873144567063, "sim_compute_sim_state": 0.009790956328850206, "sim_compute_performance-ego0": 0.0020922923663772215}}
set_robot_commands (max = mean = median = min): 0.0024909502659908045
sim_compute_performance-ego0 (max = mean = median = min): 0.0020922923663772215
sim_compute_sim_state (max = mean = median = min): 0.009790956328850206
sim_render-ego0 (max = mean = median = min): 0.003966671342555927
simulation-passed: 1
step_physics (max = mean = median = min): 0.13362541683111262
survival_time (max = mean = min): 59.99999999999873
No reset possible
66203 | 13965 | YU CHEN | CBC Net v2 test - APR 3 BC TFdata + mar 28 anomaly | aido-LFP-sim-validation | sim-2of4 | success | no | gpu-production-spot-2-05 | 0:01:54
survival_time_median: 5.599999999999988
in-drivable-lane_median: 1.2999999999999954
driven_lanedir_consec_median: 1.648817809736461
deviation-center-line_median: 0.3637224951708319


other stats
agent_compute-ego0 (max = mean = median = min): 0.0983734341849268
complete-iteration (max = mean = median = min): 0.3003062590033607
deviation-center-line (max = mean = min): 0.3637224951708319
deviation-heading (max = mean = median = min): 0.99159044721433
driven_any (max = mean = median = min): 1.9271497736844625
driven_lanedir_consec (max = mean = min): 1.648817809736461
driven_lanedir (max = mean = median = min): 1.648817809736461
get_duckie_state (max = mean = median = min): 0.027560510466584063
get_robot_state (max = mean = median = min): 0.004035415902601934
get_state_dump (max = mean = median = min): 0.00974459563736367
get_ui_image (max = mean = median = min): 0.03486603973186122
in-drivable-lane (max = mean = min): 1.2999999999999954
per-episodes details: {"LFP-norm-loop-000-ego0": {"driven_any": 1.9271497736844625, "get_ui_image": 0.03486603973186122, "step_physics": 0.10786263921619516, "survival_time": 5.599999999999988, "driven_lanedir": 1.648817809736461, "get_state_dump": 0.00974459563736367, "get_robot_state": 0.004035415902601934, "sim_render-ego0": 0.004293861642348028, "get_duckie_state": 0.027560510466584063, "in-drivable-lane": 1.2999999999999954, "deviation-heading": 0.99159044721433, "agent_compute-ego0": 0.0983734341849268, "complete-iteration": 0.3003062590033607, "set_robot_commands": 0.00265893893959248, "deviation-center-line": 0.3637224951708319, "driven_lanedir_consec": 1.648817809736461, "sim_compute_sim_state": 0.008575939499171435, "sim_compute_performance-ego0": 0.0022252724233981784}}
set_robot_commands (max = mean = median = min): 0.00265893893959248
sim_compute_performance-ego0 (max = mean = median = min): 0.0022252724233981784
sim_compute_sim_state (max = mean = median = min): 0.008575939499171435
sim_render-ego0 (max = mean = median = min): 0.004293861642348028
simulation-passed: 1
step_physics (max = mean = median = min): 0.10786263921619516
survival_time (max = mean = min): 5.599999999999988
No reset possible
66163 | 13994 | Frank (Chude) Qian πŸ‡¨πŸ‡¦ | CBC Net - MixTraining - Expert LF Human LFP - Best Loss | aido-LFP-sim-validation | sim-3of4 | success | no | gpu-production-spot-2-05 | 0:07:04
survival_time_median: 30.250000000000295
in-drivable-lane_median: 8.750000000000078
driven_lanedir_consec_median: 8.508253107502874
deviation-center-line_median: 1.5319563068653037


other stats
agent_compute-ego0 (max = mean = median = min): 0.05707476005302404
complete-iteration (max = mean = median = min): 0.32986195134644464
deviation-center-line (max = mean = min): 1.5319563068653037
deviation-heading (max = mean = median = min): 7.917269423375053
driven_any (max = mean = median = min): 13.464170547328877
driven_lanedir_consec (max = mean = min): 8.508253107502874
driven_lanedir (max = mean = median = min): 8.508253107502874
get_duckie_state (max = mean = median = min): 0.0225318969279626
get_robot_state (max = mean = median = min): 0.004038073835593245
get_state_dump (max = mean = median = min): 0.00866249646290694
get_ui_image (max = mean = median = min): 0.03907558548175069
in-drivable-lane (max = mean = min): 8.750000000000078
per-episodes details: {"LFP-norm-techtrack-000-ego0": {"driven_any": 13.464170547328877, "get_ui_image": 0.03907558548175069, "step_physics": 0.17572364870077706, "survival_time": 30.250000000000295, "driven_lanedir": 8.508253107502874, "get_state_dump": 0.00866249646290694, "get_robot_state": 0.004038073835593245, "sim_render-ego0": 0.004164433321937082, "get_duckie_state": 0.0225318969279626, "in-drivable-lane": 8.750000000000078, "deviation-heading": 7.917269423375053, "agent_compute-ego0": 0.05707476005302404, "complete-iteration": 0.32986195134644464, "set_robot_commands": 0.0025803287430564955, "deviation-center-line": 1.5319563068653037, "driven_lanedir_consec": 8.508253107502874, "sim_compute_sim_state": 0.013734246637954963, "sim_compute_performance-ego0": 0.0021713688822075873}}
set_robot_commands (max = mean = median = min): 0.0025803287430564955
sim_compute_performance-ego0 (max = mean = median = min): 0.0021713688822075873
sim_compute_sim_state (max = mean = median = min): 0.013734246637954963
sim_render-ego0 (max = mean = median = min): 0.004164433321937082
simulation-passed: 1
step_physics (max = mean = median = min): 0.17572364870077706
survival_time (max = mean = min): 30.250000000000295
No reset possible
66158 | 13998 | Frank (Chude) Qian πŸ‡¨πŸ‡¦ | baseline-behavior-cloning New Dataset | aido-LFP-sim-validation | sim-1of4 | host-error | no | gpu-production-spot-2-05 | 0:00:41
The container "solut [...]
The container "solution-ego0" exited with code 139.


Exit code 139 corresponds to a segmentation fault (signal 11); on these GPU evaluators it typically indicates the container ran out of GPU memory.
No reset possible
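
Container exit codes above 128 encode the fatal signal as 128 + signum, which is how 139 decodes to signal 11 (SIGSEGV). A small decoder for illustration (describe_exit_code is our own helper, not platform code):

import signal

def describe_exit_code(code: int) -> str:
    """Exit codes >= 128 mean the process was killed by signal
    (code - 128): 139 -> SIGSEGV, 137 -> SIGKILL, and so on."""
    if code >= 128:
        sig = signal.Signals(code - 128)
        return f"killed by {sig.name} (signal {sig.value})"
    return f"exited with status {code}"

print(describe_exit_code(139))  # killed by SIGSEGV (signal 11)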
66147 | 13578 | MΓ‘rton Tim πŸ‡­πŸ‡Ί | 3626 | aido-LFV_multi-sim-validation | 402 | host-error | no | gpu-production-spot-2-05 | 0:01:10
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego1" aborted with the following error:

error in ego1 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(0, experiment_idx=0, checkpoint_idx=0, logger=context)
              ||   File "/submission/model.py", line 42, in __init__
              ||     dummy_env = wrap_env(config["env_config"], extra_config={
              ||   File "/submission/duckietown_utils/env.py", line 46, in wrap_env
              ||     env = SegmentObsWrapper(env, model=extra_config['model'])
              ||   File "/submission/duckietown_utils/wrappers/SegmentObsWrapper.py", line 43, in __init__
              ||     self.model.cuda()
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 274, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego1" aborted with the following error:

error in ego1 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(0, experiment_idx=0, checkpoint_idx=0, logger=context)
              ||   File "/submission/model.py", line 42, in __init__
              ||     dummy_env = wrap_env(config["env_config"], extra_config={
              ||   File "/submission/duckietown_utils/env.py", line 46, in wrap_env
              ||     env = SegmentObsWrapper(env, model=extra_config['model'])
              ||   File "/submission/duckietown_utils/wrappers/SegmentObsWrapper.py", line 43, in __init__
              ||     self.model.cuda()
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

No reset possible
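
The job above failed because solution.py calls self.model.cuda() unconditionally while the evaluator's GPU memory is already committed elsewhere. A defensive sketch that falls back to the CPU when CUDA is unavailable or out of memory (PyTorch; the fallback policy is a suggestion, not what this submission does):

import torch

def to_best_device(model: torch.nn.Module) -> torch.nn.Module:
    """Move the model to the GPU if possible; fall back to CPU on
    a CUDA out-of-memory error instead of crashing the container."""
    if torch.cuda.is_available():
        try:
            return model.cuda()
        except RuntimeError as e:  # e.g. "CUDA error: out of memory"
            if "out of memory" not in str(e):
                raise
            torch.cuda.empty_cache()
    return model.cpu()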
66141 | 13578 | MΓ‘rton Tim πŸ‡­πŸ‡Ί | 3626 | aido-LFV_multi-sim-validation | 402 | host-error | no | gpu-production-spot-2-05 | 0:01:06
InvalidEnvironment: RuntimeError: CUDA error: out of memory (traceback identical to job 66147 above)
No reset possible
66133 | 14034 | YU CHEN | CBC V2, mar28_apr6 bc, mar31_apr6 anomaly | aido-LFP-sim-validation | sim-0of4 | success | no | gpu-production-spot-2-05 | 0:01:27
survival_time_median: 2.549999999999999
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 0.5032534291850376
deviation-center-line_median: 0.18135110792235573


other stats
agent_compute-ego0 (max = mean = median = min): 0.10391685595879188
complete-iteration (max = mean = median = min): 0.379687277170328
deviation-center-line (max = mean = min): 0.18135110792235573
deviation-heading (max = mean = median = min): 0.6286963328260954
driven_any (max = mean = median = min): 0.5157920411063238
driven_lanedir_consec (max = mean = min): 0.5032534291850376
driven_lanedir (max = mean = median = min): 0.5032534291850376
get_duckie_state (max = mean = median = min): 0.023194918265709512
get_robot_state (max = mean = median = min): 0.004112867208627554
get_state_dump (max = mean = median = min): 0.009331061289860653
get_ui_image (max = mean = median = min): 0.04393269007022564
in-drivable-lane (max = mean = min): 0.0
per-episodes details: {"LFP-norm-zigzag-000-ego0": {"driven_any": 0.5157920411063238, "get_ui_image": 0.04393269007022564, "step_physics": 0.1753776211004991, "survival_time": 2.549999999999999, "driven_lanedir": 0.5032534291850376, "get_state_dump": 0.009331061289860653, "get_robot_state": 0.004112867208627554, "sim_render-ego0": 0.004400876852182241, "get_duckie_state": 0.023194918265709512, "in-drivable-lane": 0.0, "deviation-heading": 0.6286963328260954, "agent_compute-ego0": 0.10391685595879188, "complete-iteration": 0.379687277170328, "set_robot_commands": 0.0026758588277376615, "deviation-center-line": 0.18135110792235573, "driven_lanedir_consec": 0.5032534291850376, "sim_compute_sim_state": 0.010175072229825534, "sim_compute_performance-ego0": 0.002458068040701059}}
set_robot_commands (max = mean = median = min): 0.0026758588277376615
sim_compute_performance-ego0 (max = mean = median = min): 0.002458068040701059
sim_compute_sim_state (max = mean = median = min): 0.010175072229825534
sim_render-ego0 (max = mean = median = min): 0.004400876852182241
simulation-passed: 1
step_physics (max = mean = median = min): 0.1753776211004991
survival_time (max = mean = min): 2.549999999999999
No reset possible
66112 | 14036 | YU CHEN | CBC V2 non dropout comparsion, mar28_apr6 bc, mar31_apr6 anomaly | aido-LFP-sim-validation | sim-3of4 | success | no | gpu-production-spot-2-05 | 0:05:04
survival_time_median: 11.55000000000003
in-drivable-lane_median: 4.70000000000001
driven_lanedir_consec_median: 2.028671565560625
deviation-center-line_median: 0.495405540486016


other stats
agent_compute-ego0 (max = mean = median = min): 0.10365198912291693
complete-iteration (max = mean = median = min): 0.389828661392475
deviation-center-line (max = mean = min): 0.495405540486016
deviation-heading (max = mean = median = min): 2.385005417853838
driven_any (max = mean = median = min): 3.7820959340890985
driven_lanedir_consec (max = mean = min): 2.028671565560625
driven_lanedir (max = mean = median = min): 2.028671565560625
get_duckie_state (max = mean = median = min): 0.02384429039626286
get_robot_state (max = mean = median = min): 0.004396968874438056
get_state_dump (max = mean = median = min): 0.00944244964369412
get_ui_image (max = mean = median = min): 0.04117613105938352
in-drivable-lane (max = mean = min): 4.70000000000001
per-episodes details: {"LFP-norm-techtrack-000-ego0": {"driven_any": 3.7820959340890985, "get_ui_image": 0.04117613105938352, "step_physics": 0.18330107064082704, "survival_time": 11.55000000000003, "driven_lanedir": 2.028671565560625, "get_state_dump": 0.00944244964369412, "get_robot_state": 0.004396968874438056, "sim_render-ego0": 0.004458588772806628, "get_duckie_state": 0.02384429039626286, "in-drivable-lane": 4.70000000000001, "deviation-heading": 2.385005417853838, "agent_compute-ego0": 0.10365198912291693, "complete-iteration": 0.389828661392475, "set_robot_commands": 0.0027037427343171217, "deviation-center-line": 0.495405540486016, "driven_lanedir_consec": 2.028671565560625, "sim_compute_sim_state": 0.014320182389226454, "sim_compute_performance-ego0": 0.0024135123039114065}}
set_robot_commands (max = mean = median = min): 0.0027037427343171217
sim_compute_performance-ego0 (max = mean = median = min): 0.0024135123039114065
sim_compute_sim_state (max = mean = median = min): 0.014320182389226454
sim_render-ego0 (max = mean = median = min): 0.004458588772806628
simulation-passed: 1
step_physics (max = mean = median = min): 0.18330107064082704
survival_time (max = mean = min): 11.55000000000003
No reset possible
66100 | 13511 | AndrΓ‘s Kalapos πŸ‡­πŸ‡Ί | real-v1.0-3091-310 | aido-LFP-sim-validation | sim-3of4 | failed | no | gpu-production-spot-2-05 | 0:02:39
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 216, in main
    raise InvalidSubmission(msg)
duckietown_challenges.exceptions.InvalidSubmission: Timeout during connection to ego0: <SignalTimeout in state: 2>
No reset possible
66095 | 13511 | AndrΓ‘s Kalapos πŸ‡­πŸ‡Ί | real-v1.0-3091-310 | aido-LFP-sim-validation | sim-3of4 | failed | no | gpu-production-spot-2-05 | 0:00:41
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
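This job and the two below abort with the same root cause: TensorFlow reports "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize". On shared GPU evaluators this is most often the process trying to claim all GPU memory up front (a CUDA/cuDNN version mismatch produces the same message). A common TF 2.x mitigation, shown as a sketch rather than a guaranteed fix for this submission:

import tensorflow as tf

# Enable incremental GPU memory allocation; this must run before the
# first operation touches the GPU (e.g., before building the policy).
for gpu in tf.config.experimental.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# Equivalent environment-variable form, usable without code changes:
#   export TF_FORCE_GPU_ALLOW_GROWTH=true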
Job 66093 | submission 13518 | AndrΓ‘s Kalapos πŸ‡­πŸ‡Ί | real-v1.0-3091-310 | aido-LFV_multi-sim-validation | 403 | failed | up to date: no | gpu-production-spot-2-05 | 0:00:59 | InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego1" aborted with the following error:

error in ego1 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
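Note: the traceback above shows where the run dies. The agent's init() in solution.py constructs RLlibModel, whose PPOTrainer builds a local RolloutWorker and the full policy graph before the first observation is ever processed. For an evaluation-only container the trainer config is typically kept minimal; below is a sketch under assumed names (the CartPole env id is only a stand-in for the submission's registered Duckietown env, and the checkpoint path is hypothetical), not the submission's actual code.

    # Sketch of an inference-oriented RLlib setup; names are illustrative only.
    import ray
    from ray.rllib.agents.ppo import PPOTrainer

    ray.init(ignore_reinit_error=True)

    config = {
        "env": "CartPole-v0",  # stand-in; the real agent registers its Duckietown env
        "num_workers": 0,      # no remote rollout workers, only the local one
        "num_gpus": 0.5,       # fractional GPU leaves headroom for other processes
        "explore": False,      # deterministic actions at evaluation time
    }
    trainer = PPOTrainer(config=config)
    # trainer.restore("/submission/checkpoint")  # hypothetical checkpoint path
    # per frame: action = trainer.compute_action(obs)

Even with num_workers set to 0, the policy graph is still built once on the local worker, which is exactly the step that fails in these logs.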
Job 66085 | submission 13518 | AndrΓ‘s Kalapos πŸ‡­πŸ‡Ί | label: real-v1.0-3091-310 | challenge: aido-LFV_multi-sim-validation | step: 402 | status: failed | up to date: no | evaluator: gpu-production-spot-2-05 | duration: 0:01:06
Message (truncated): InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego1" aborted with the following error:

error in ego1 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
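Note: both failed jobs abort on the same root cause. While PPOTrainer builds the value network's first convolution, cuDNN fails to initialize ("Failed to get convolution algorithm"). On a shared GPU this usually means cuDNN could not obtain memory because TensorFlow's default allocator reserved the whole device, or that the image's CUDA/cuDNN versions do not match the host driver. A common mitigation, sketched here assuming TF 2.x executing the tf1-style graph visible in the trace, is to enable on-demand GPU memory allocation before any graph exists:

    # Sketch: let TensorFlow grow GPU memory on demand instead of pre-allocating
    # the whole device, which can starve cuDNN handle creation at graph build time.
    # This must run before any TF graph or session is created.
    import tensorflow as tf

    for gpu in tf.config.experimental.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)

    # TF1-style equivalent, matching the tf.compat.v1 sessions in the traceback:
    sess_config = tf.compat.v1.ConfigProto()
    sess_config.gpu_options.allow_growth = True
    session = tf.compat.v1.Session(config=sess_config)

RLlib creates its own sessions internally, so in practice these options are usually routed through the trainer config's tf_session_args entry rather than by opening a session directly.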
Job 66078 | submission 13541 | AndrΓ‘s Kalapos πŸ‡­πŸ‡Ί | label: 3090 | challenge: aido-LFP-sim-validation | step: sim-0of4 | status: success | up to date: no | evaluator: gpu-production-spot-2-05 | duration: 0:01:27
survival_time_median: 2.25
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 0.4991245329254509
deviation-center-line_median: 0.2072516336691705


other stats
agent_compute-ego0 (max = mean = median = min): 0.016768854597340458
complete-iteration (max = mean = median = min): 0.3058849624965502
deviation-center-line (max = mean = min): 0.2072516336691705
deviation-heading (max = mean = median = min): 0.9383715258029384
driven_any (max = mean = median = min): 0.549473851217191
driven_lanedir_consec (max = mean = min): 0.4991245329254509
driven_lanedir (max = mean = median = min): 0.4991245329254509
get_duckie_state (max = mean = median = min): 0.023466644079788872
get_robot_state (max = mean = median = min): 0.004355497982191003
get_state_dump (max = mean = median = min): 0.009024169134057089
get_ui_image (max = mean = median = min): 0.04293083626291026
in-drivable-lane (max = mean = min): 0.0
per-episodes
details: {"LFP-norm-zigzag-000-ego0": {"driven_any": 0.549473851217191, "get_ui_image": 0.04293083626291026, "step_physics": 0.18945449331532355, "survival_time": 2.25, "driven_lanedir": 0.4991245329254509, "get_state_dump": 0.009024169134057089, "get_robot_state": 0.004355497982191003, "sim_render-ego0": 0.004388482674308445, "get_duckie_state": 0.023466644079788872, "in-drivable-lane": 0.0, "deviation-heading": 0.9383715258029384, "agent_compute-ego0": 0.016768854597340458, "complete-iteration": 0.3058849624965502, "set_robot_commands": 0.002795981324237326, "deviation-center-line": 0.2072516336691705, "driven_lanedir_consec": 0.4991245329254509, "sim_compute_sim_state": 0.010199774866518766, "sim_compute_performance-ego0": 0.00239370698514192}}
set_robot_commands (max = mean = median = min): 0.002795981324237326
sim_compute_performance-ego0 (max = mean = median = min): 0.00239370698514192
sim_compute_sim_state (max = mean = median = min): 0.010199774866518766
sim_render-ego0 (max = mean = median = min): 0.004388482674308445
simulation-passed: 1
step_physics (max = mean = median = min): 0.18945449331532355
survival_time (max = mean = min): 2.25
No reset possible
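Note on reading the score tables: every metric under "other stats" is an aggregate over episodes, and these validation steps ran a single episode (the details mapping above has exactly one key), so max, mean, median and min necessarily coincide. A sketch of the aggregation, using a two-metric excerpt of the JSON above (details_text would normally hold the full per-episodes string):

    # Sketch: recompute per-metric aggregates from a "details" mapping.
    # With one episode, min, mean, median and max are all the same number.
    import json
    from statistics import mean, median

    details_text = '{"LFP-norm-zigzag-000-ego0": {"survival_time": 2.25, "driven_lanedir": 0.4991245329254509}}'
    details = json.loads(details_text)

    by_metric = {}
    for episode_metrics in details.values():
        for name, value in episode_metrics.items():
            by_metric.setdefault(name, []).append(value)

    for name, values in sorted(by_metric.items()):
        print(name, min(values), mean(values), median(values), max(values))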
Job 66072 | submission 13541 | AndrΓ‘s Kalapos πŸ‡­πŸ‡Ί | label: 3090 | challenge: aido-LFP-sim-validation | step: sim-0of4 | status: success | up to date: no | evaluator: gpu-production-spot-2-05 | duration: 0:01:20
survival_time_median: 2.2
in-drivable-lane_median: 0.15000000000000002
driven_lanedir_consec_median: 0.42528407828049586
deviation-center-line_median: 0.1493602954300961


other stats
agent_compute-ego0 (max = mean = median = min): 0.017744281556871203
complete-iteration (max = mean = median = min): 0.29267679320441353
deviation-center-line (max = mean = min): 0.1493602954300961
deviation-heading (max = mean = median = min): 0.9262799993963242
driven_any (max = mean = median = min): 0.543563366765026
driven_lanedir_consec (max = mean = min): 0.42528407828049586
driven_lanedir (max = mean = median = min): 0.42528407828049586
get_duckie_state (max = mean = median = min): 0.023765770594278972
get_robot_state (max = mean = median = min): 0.004258415434095594
get_state_dump (max = mean = median = min): 0.009386920928955078
get_ui_image (max = mean = median = min): 0.043997769885592994
in-drivable-lane (max = mean = min): 0.15000000000000002
per-episodes
details: {"LFP-norm-zigzag-000-ego0": {"driven_any": 0.543563366765026, "get_ui_image": 0.043997769885592994, "step_physics": 0.17399358749389648, "survival_time": 2.2, "driven_lanedir": 0.42528407828049586, "get_state_dump": 0.009386920928955078, "get_robot_state": 0.004258415434095594, "sim_render-ego0": 0.004433637195163303, "get_duckie_state": 0.023765770594278972, "in-drivable-lane": 0.15000000000000002, "deviation-heading": 0.9262799993963242, "agent_compute-ego0": 0.017744281556871203, "complete-iteration": 0.29267679320441353, "set_robot_commands": 0.0028001043531629775, "deviation-center-line": 0.1493602954300961, "driven_lanedir_consec": 0.42528407828049586, "sim_compute_sim_state": 0.009767707188924153, "sim_compute_performance-ego0": 0.0024129867553710936}}
set_robot_commands (max = mean = median = min): 0.0028001043531629775
sim_compute_performance-ego0 (max = mean = median = min): 0.0024129867553710936
sim_compute_sim_state (max = mean = median = min): 0.009767707188924153
sim_render-ego0 (max = mean = median = min): 0.004433637195163303
simulation-passed: 1
step_physics (max = mean = median = min): 0.17399358749389648
survival_time (max = mean = min): 2.2
No reset possible
Job 66054 | submission 13571 | MΓ‘rton Tim πŸ‡­πŸ‡Ί | label: 3626 | challenge: aido-LFP-sim-validation | step: sim-0of4 | status: success | up to date: no | evaluator: gpu-production-spot-2-05 | duration: 0:04:22
survival_time_median: 14.950000000000076
in-drivable-lane_median: 1.950000000000005
driven_lanedir_consec_median: 5.475121787668234
deviation-center-line_median: 0.8762724968628142


other stats
agent_compute-ego0 (max = mean = median = min): 0.047296969095865886
complete-iteration (max = mean = median = min): 0.3556336482365926
deviation-center-line (max = mean = min): 0.8762724968628142
deviation-heading (max = mean = median = min): 3.6195115630932793
driven_any (max = mean = median = min): 6.7696470371732405
driven_lanedir_consec (max = mean = min): 5.475121787668234
driven_lanedir (max = mean = median = min): 5.475121787668234
get_duckie_state (max = mean = median = min): 0.02206814686457316
get_robot_state (max = mean = median = min): 0.004018030961354574
get_state_dump (max = mean = median = min): 0.008714563846588134
get_ui_image (max = mean = median = min): 0.04145502249399821
in-drivable-lane (max = mean = min): 1.950000000000005
per-episodes
details: {"LFP-norm-zigzag-000-ego0": {"driven_any": 6.7696470371732405, "get_ui_image": 0.04145502249399821, "step_physics": 0.20929657220840453, "survival_time": 14.950000000000076, "driven_lanedir": 5.475121787668234, "get_state_dump": 0.008714563846588134, "get_robot_state": 0.004018030961354574, "sim_render-ego0": 0.004088871479034424, "get_duckie_state": 0.02206814686457316, "in-drivable-lane": 1.950000000000005, "deviation-heading": 3.6195115630932793, "agent_compute-ego0": 0.047296969095865886, "complete-iteration": 0.3556336482365926, "set_robot_commands": 0.002363018989562988, "deviation-center-line": 0.8762724968628142, "driven_lanedir_consec": 5.475121787668234, "sim_compute_sim_state": 0.01407595157623291, "sim_compute_performance-ego0": 0.002151618003845215}}
set_robot_commands (max = mean = median = min): 0.002363018989562988
sim_compute_performance-ego0 (max = mean = median = min): 0.002151618003845215
sim_compute_sim_state (max = mean = median = min): 0.01407595157623291
sim_render-ego0 (max = mean = median = min): 0.004088871479034424
simulation-passed: 1
step_physics (max = mean = median = min): 0.20929657220840453
survival_time (max = mean = min): 14.950000000000076
No reset possible
Job 66029 | submission 13943 | YU CHEN | label: CBC Net v2 test - added mar 31 anomaly + mar 28 bc_v1 | challenge: aido-LF-sim-validation | step: sim-0of4 | status: success | up to date: no | evaluator: gpu-production-spot-2-05 | duration: 0:10:47
driven_lanedir_consec_median: 21.03123742098829
survival_time_median: 59.99999999999873
deviation-center-line_median: 4.085070099551673
in-drivable-lane_median: 13.899999999999633


other stats
agent_compute-ego0 (max = mean = median = min): 0.09632852274015682
complete-iteration (max = mean = median = min): 0.29713477679434463
deviation-center-line (max = mean = min): 4.085070099551673
deviation-heading (max = mean = median = min): 7.723570637056353
driven_any (max = mean = median = min): 27.49417814300743
driven_lanedir_consec (max = mean = min): 21.03123742098829
driven_lanedir (max = mean = median = min): 21.03123742098829
get_duckie_state (max = mean = median = min): 1.4658474505295067e-06
get_robot_state (max = mean = median = min): 0.004117984755847178
get_state_dump (max = mean = median = min): 0.005179120340911078
get_ui_image (max = mean = median = min): 0.03164853700293192
in-drivable-lane (max = mean = min): 13.899999999999633
per-episodes
details: {"LF-norm-loop-000-ego0": {"driven_any": 27.49417814300743, "get_ui_image": 0.03164853700293192, "step_physics": 0.1404417317475407, "survival_time": 59.99999999999873, "driven_lanedir": 21.03123742098829, "get_state_dump": 0.005179120340911078, "get_robot_state": 0.004117984755847178, "sim_render-ego0": 0.004164347739938296, "get_duckie_state": 1.4658474505295067e-06, "in-drivable-lane": 13.899999999999633, "deviation-heading": 7.723570637056353, "agent_compute-ego0": 0.09632852274015682, "complete-iteration": 0.29713477679434463, "set_robot_commands": 0.0025831499663518925, "deviation-center-line": 4.085070099551673, "driven_lanedir_consec": 21.03123742098829, "sim_compute_sim_state": 0.010358887647808243, "sim_compute_performance-ego0": 0.002220633027952577}}
set_robot_commands (max = mean = median = min): 0.0025831499663518925
sim_compute_performance-ego0 (max = mean = median = min): 0.002220633027952577
sim_compute_sim_state (max = mean = median = min): 0.010358887647808243
sim_render-ego0 (max = mean = median = min): 0.004164347739938296
simulation-passed: 1
step_physics (max = mean = median = min): 0.1404417317475407
survival_time (max = mean = min): 59.99999999999873
No reset possible
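A side note on values such as survival_time = 59.99999999999873: survival time appears to be accumulated in fixed simulation steps (the 2.2 s, 2.25 s and 14.95 s entries suggest a 0.05 s step), and 0.05 has no exact binary floating-point representation, so 1200 accumulated steps land near, but not exactly on, the nominal 60 s episode limit. A minimal illustration, assuming that step size:

    # Sketch: repeated addition of 0.05 (inexact in binary floating point)
    # drifts slightly from the exact decimal total.
    t = 0.0
    for _ in range(1200):  # 1200 steps of 0.05 s = 60 s nominal
        t += 0.05
    print(t)  # not exactly 60.0; the log above shows 59.99999999999873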
Job 65980 | submission 14013 | YU CHEN | label: CBC Net v2 test - APR 6 anomaly + mar 28 bc | challenge: aido-LF-sim-validation | step: sim-2of4 | status: success | up to date: no | evaluator: gpu-production-spot-2-05 | duration: 0:10:17
driven_lanedir_consec_median: 12.708048650896863
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.5219417229222363
in-drivable-lane_median: 27.899999999999455


other stats
agent_compute-ego0 (max = mean = median = min): 0.09287009926064624
complete-iteration (max = mean = median = min): 0.27965198805886043
deviation-center-line (max = mean = min): 2.5219417229222363
deviation-heading (max = mean = median = min): 12.862476835540049
driven_any (max = mean = median = min): 24.97802073360978
driven_lanedir_consec (max = mean = min): 12.708048650896863
driven_lanedir (max = mean = median = min): 12.708048650896863
get_duckie_state (max = mean = median = min): 2.047501436181112e-06
get_robot_state (max = mean = median = min): 0.00404139740282451
get_state_dump (max = mean = median = min): 0.004974844255217109
get_ui_image (max = mean = median = min): 0.028890102133961343
in-drivable-lane (max = mean = min): 27.899999999999455
per-episodes
details: {"LF-norm-small_loop-000-ego0": {"driven_any": 24.97802073360978, "get_ui_image": 0.028890102133961343, "step_physics": 0.13304160278504537, "survival_time": 59.99999999999873, "driven_lanedir": 12.708048650896863, "get_state_dump": 0.004974844255217109, "get_robot_state": 0.00404139740282451, "sim_render-ego0": 0.004197054758953314, "get_duckie_state": 2.047501436181112e-06, "in-drivable-lane": 27.899999999999455, "deviation-heading": 12.862476835540049, "agent_compute-ego0": 0.09287009926064624, "complete-iteration": 0.27965198805886043, "set_robot_commands": 0.002614692089261858, "deviation-center-line": 2.5219417229222363, "driven_lanedir_consec": 12.708048650896863, "sim_compute_sim_state": 0.006717293784580659, "sim_compute_performance-ego0": 0.0022049581478477817}}
set_robot_commands (max = mean = median = min): 0.002614692089261858
sim_compute_performance-ego0 (max = mean = median = min): 0.0022049581478477817
sim_compute_sim_state (max = mean = median = min): 0.006717293784580659
sim_render-ego0 (max = mean = median = min): 0.004197054758953314
simulation-passed: 1
step_physics (max = mean = median = min): 0.13304160278504537
survival_time (max = mean = min): 59.99999999999873
No reset possible
Job 65970 | submission 13997 | Frank (Chude) Qian πŸ‡¨πŸ‡¦ | label: baseline-behavior-cloning New Dataset | challenge: aido-LF-sim-validation | step: sim-1of4 | status: success | up to date: no | evaluator: gpu-production-spot-2-05 | duration: 0:01:28
driven_lanedir_consec_median: 0.6659138214033232
survival_time_median: 3.099999999999997
deviation-center-line_median: 0.051727897086657995
in-drivable-lane_median: 1.399999999999996


other stats
agent_compute-ego0 (max = mean = median = min): 0.06677569283379449
complete-iteration (max = mean = median = min): 0.21815442282056052
deviation-center-line (max = mean = min): 0.051727897086657995
deviation-heading (max = mean = median = min): 0.2995587772819732
driven_any (max = mean = median = min): 1.07125331009109
driven_lanedir_consec (max = mean = min): 0.6659138214033232
driven_lanedir (max = mean = median = min): 0.6659138214033232
get_duckie_state (max = mean = median = min): 1.7029898507254463e-06
get_robot_state (max = mean = median = min): 0.0044088779933868895
get_state_dump (max = mean = median = min): 0.005574941635131836
get_ui_image (max = mean = median = min): 0.037068677326989555
in-drivable-lane (max = mean = min): 1.399999999999996
per-episodes
details: {"LF-norm-techtrack-000-ego0": {"driven_any": 1.07125331009109, "get_ui_image": 0.037068677326989555, "step_physics": 0.08593848016526964, "survival_time": 3.099999999999997, "driven_lanedir": 0.6659138214033232, "get_state_dump": 0.005574941635131836, "get_robot_state": 0.0044088779933868895, "sim_render-ego0": 0.004549155159602089, "get_duckie_state": 1.7029898507254463e-06, "in-drivable-lane": 1.399999999999996, "deviation-heading": 0.2995587772819732, "agent_compute-ego0": 0.06677569283379449, "complete-iteration": 0.21815442282056052, "set_robot_commands": 0.00265027227855864, "deviation-center-line": 0.051727897086657995, "driven_lanedir_consec": 0.6659138214033232, "sim_compute_sim_state": 0.00875998300219339, "sim_compute_performance-ego0": 0.002329716606745644}}
set_robot_commands (max = mean = median = min): 0.00265027227855864
sim_compute_performance-ego0 (max = mean = median = min): 0.002329716606745644
sim_compute_sim_state (max = mean = median = min): 0.00875998300219339
sim_render-ego0 (max = mean = median = min): 0.004549155159602089
simulation-passed: 1
step_physics (max = mean = median = min): 0.08593848016526964
survival_time (max = mean = min): 3.099999999999997
No reset possible
Job 65953 | submission 14033 | YU CHEN | label: CBC V2, mar28_apr6 bc, mar31_apr6 anomaly | challenge: aido-LF-sim-validation | step: sim-2of4 | status: success | up to date: no | evaluator: gpu-production-spot-2-05 | duration: 0:02:13
driven_lanedir_consec_median: 1.2233179850112728
survival_time_median: 8.999999999999993
deviation-center-line_median: 0.30526546086810546
in-drivable-lane_median: 4.350000000000001


other stats
agent_compute-ego0_max: 0.08710306784066048
agent_compute-ego0_mean: 0.08710306784066048
agent_compute-ego0_median: 0.08710306784066048
agent_compute-ego0_min: 0.08710306784066048
complete-iteration_max: 0.24335733961663852
complete-iteration_mean: 0.24335733961663852
complete-iteration_median: 0.24335733961663852
complete-iteration_min: 0.24335733961663852
deviation-center-line_max: 0.30526546086810546
deviation-center-line_mean: 0.30526546086810546
deviation-center-line_min: 0.30526546086810546
deviation-heading_max: 1.9375636610902296
deviation-heading_mean: 1.9375636610902296
deviation-heading_median: 1.9375636610902296
deviation-heading_min: 1.9375636610902296
driven_any_max: 2.5195313788512235
driven_any_mean: 2.5195313788512235
driven_any_median: 2.5195313788512235
driven_any_min: 2.5195313788512235
driven_lanedir_consec_max: 1.2233179850112728
driven_lanedir_consec_mean: 1.2233179850112728
driven_lanedir_consec_min: 1.2233179850112728
driven_lanedir_max: 1.2233179850112728
driven_lanedir_mean: 1.2233179850112728
driven_lanedir_median: 1.2233179850112728
driven_lanedir_min: 1.2233179850112728
get_duckie_state_max: 1.2553199220098844e-06
get_duckie_state_mean: 1.2553199220098844e-06
get_duckie_state_median: 1.2553199220098844e-06
get_duckie_state_min: 1.2553199220098844e-06
get_robot_state_max: 0.003741481686165319
get_robot_state_mean: 0.003741481686165319
get_robot_state_median: 0.003741481686165319
get_robot_state_min: 0.003741481686165319
get_state_dump_max: 0.004580667664332943
get_state_dump_mean: 0.004580667664332943
get_state_dump_median: 0.004580667664332943
get_state_dump_min: 0.004580667664332943
get_ui_image_max: 0.02664798267638486
get_ui_image_mean: 0.02664798267638486
get_ui_image_median: 0.02664798267638486
get_ui_image_min: 0.02664798267638486
in-drivable-lane_max: 4.350000000000001
in-drivable-lane_mean: 4.350000000000001
in-drivable-lane_min: 4.350000000000001
per-episodes
details: {"LF-norm-small_loop-000-ego0": {"driven_any": 2.5195313788512235, "get_ui_image": 0.02664798267638486, "step_physics": 0.10703892997615246, "survival_time": 8.999999999999993, "driven_lanedir": 1.2233179850112728, "get_state_dump": 0.004580667664332943, "get_robot_state": 0.003741481686165319, "sim_render-ego0": 0.004027552367573944, "get_duckie_state": 1.2553199220098844e-06, "in-drivable-lane": 4.350000000000001, "deviation-heading": 1.9375636610902296, "agent_compute-ego0": 0.08710306784066048, "complete-iteration": 0.24335733961663852, "set_robot_commands": 0.0024723861757562963, "deviation-center-line": 0.30526546086810546, "driven_lanedir_consec": 1.2233179850112728, "sim_compute_sim_state": 0.005568987756802891, "sim_compute_performance-ego0": 0.00208671053470169}}
set_robot_commands_max: 0.0024723861757562963
set_robot_commands_mean: 0.0024723861757562963
set_robot_commands_median: 0.0024723861757562963
set_robot_commands_min: 0.0024723861757562963
sim_compute_performance-ego0_max: 0.00208671053470169
sim_compute_performance-ego0_mean: 0.00208671053470169
sim_compute_performance-ego0_median: 0.00208671053470169
sim_compute_performance-ego0_min: 0.00208671053470169
sim_compute_sim_state_max: 0.005568987756802891
sim_compute_sim_state_mean: 0.005568987756802891
sim_compute_sim_state_median: 0.005568987756802891
sim_compute_sim_state_min: 0.005568987756802891
sim_render-ego0_max: 0.004027552367573944
sim_render-ego0_mean: 0.004027552367573944
sim_render-ego0_median: 0.004027552367573944
sim_render-ego0_min: 0.004027552367573944
simulation-passed: 1
step_physics_max: 0.10703892997615246
step_physics_mean: 0.10703892997615246
step_physics_median: 0.10703892997615246
step_physics_min: 0.10703892997615246
survival_time_max: 8.999999999999993
survival_time_mean: 8.999999999999993
survival_time_min: 8.999999999999993
No reset possible
Job ID: 65883 | submission: 13579 | user: Andras Beres | user label: 202-1 | challenge: aido-LF-sim-testing | step: sim-0of4 | status: success | up to date: no | evaluator: gpu-production-spot-2-05 | duration: 0:12:35
driven_lanedir_consec_median: 30.384466630989007
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.9389847694537696
in-drivable-lane_median: 0.5999999999999908


other stats
agent_compute-ego0_max: 0.022443203008939185
agent_compute-ego0_mean: 0.022443203008939185
agent_compute-ego0_median: 0.022443203008939185
agent_compute-ego0_min: 0.022443203008939185
complete-iteration_max: 0.22078942915085056
complete-iteration_mean: 0.22078942915085056
complete-iteration_median: 0.22078942915085056
complete-iteration_min: 0.22078942915085056
deviation-center-line_max: 3.9389847694537696
deviation-center-line_mean: 3.9389847694537696
deviation-center-line_min: 3.9389847694537696
deviation-heading_max: 7.308711189382826
deviation-heading_mean: 7.308711189382826
deviation-heading_median: 7.308711189382826
deviation-heading_min: 7.308711189382826
driven_any_max: 30.996324785684447
driven_any_mean: 30.996324785684447
driven_any_median: 30.996324785684447
driven_any_min: 30.996324785684447
driven_lanedir_consec_max: 30.384466630989007
driven_lanedir_consec_mean: 30.384466630989007
driven_lanedir_consec_min: 30.384466630989007
driven_lanedir_max: 30.384466630989007
driven_lanedir_mean: 30.384466630989007
driven_lanedir_median: 30.384466630989007
driven_lanedir_min: 30.384466630989007
get_duckie_state_max: 1.8116635744220312e-06
get_duckie_state_mean: 1.8116635744220312e-06
get_duckie_state_median: 1.8116635744220312e-06
get_duckie_state_min: 1.8116635744220312e-06
get_robot_state_max: 0.004242453348825218
get_robot_state_mean: 0.004242453348825218
get_robot_state_median: 0.004242453348825218
get_robot_state_min: 0.004242453348825218
get_state_dump_max: 0.005187209103129289
get_state_dump_mean: 0.005187209103129289
get_state_dump_median: 0.005187209103129289
get_state_dump_min: 0.005187209103129289
get_ui_image_max: 0.03218999671300782
get_ui_image_mean: 0.03218999671300782
get_ui_image_median: 0.03218999671300782
get_ui_image_min: 0.03218999671300782
in-drivable-lane_max: 0.5999999999999908
in-drivable-lane_mean: 0.5999999999999908
in-drivable-lane_min: 0.5999999999999908
per-episodes
details: {"LF-norm-loop-000-ego0": {"driven_any": 30.996324785684447, "get_ui_image": 0.03218999671300782, "step_physics": 0.1370486687065461, "survival_time": 59.99999999999873, "driven_lanedir": 30.384466630989007, "get_state_dump": 0.005187209103129289, "get_robot_state": 0.004242453348825218, "sim_render-ego0": 0.004300892104911963, "get_duckie_state": 1.8116635744220312e-06, "in-drivable-lane": 0.5999999999999908, "deviation-heading": 7.308711189382826, "agent_compute-ego0": 0.022443203008939185, "complete-iteration": 0.22078942915085056, "set_robot_commands": 0.002676006955568439, "deviation-center-line": 3.9389847694537696, "driven_lanedir_consec": 30.384466630989007, "sim_compute_sim_state": 0.010304675709695043, "sim_compute_performance-ego0": 0.002304706049402191}}
set_robot_commands_max: 0.002676006955568439
set_robot_commands_mean: 0.002676006955568439
set_robot_commands_median: 0.002676006955568439
set_robot_commands_min: 0.002676006955568439
sim_compute_performance-ego0_max: 0.002304706049402191
sim_compute_performance-ego0_mean: 0.002304706049402191
sim_compute_performance-ego0_median: 0.002304706049402191
sim_compute_performance-ego0_min: 0.002304706049402191
sim_compute_sim_state_max: 0.010304675709695043
sim_compute_sim_state_mean: 0.010304675709695043
sim_compute_sim_state_median: 0.010304675709695043
sim_compute_sim_state_min: 0.010304675709695043
sim_render-ego0_max: 0.004300892104911963
sim_render-ego0_mean: 0.004300892104911963
sim_render-ego0_median: 0.004300892104911963
sim_render-ego0_min: 0.004300892104911963
simulation-passed: 1
step_physics_max: 0.1370486687065461
step_physics_mean: 0.1370486687065461
step_physics_median: 0.1370486687065461
step_physics_min: 0.1370486687065461
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
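A side note on the per-iteration timings: in the published numbers, complete-iteration appears to equal the sum of the individual phase timings (step_physics, agent_compute, rendering, state queries, and so on) up to a small bookkeeping overhead. The sketch below checks this for the job above using the values from its details JSON; it is an observation about these numbers, not documented evaluator semantics.

    # Per-phase mean timings (seconds) copied from the job's details JSON above.
    phases = {
        "step_physics": 0.1370486687065461,
        "agent_compute-ego0": 0.022443203008939185,
        "get_ui_image": 0.03218999671300782,
        "get_state_dump": 0.005187209103129289,
        "get_robot_state": 0.004242453348825218,
        "sim_render-ego0": 0.004300892104911963,
        "get_duckie_state": 1.8116635744220312e-06,
        "sim_compute_sim_state": 0.010304675709695043,
        "sim_compute_performance-ego0": 0.002304706049402191,
        "set_robot_commands": 0.002676006955568439,
    }
    complete_iteration = 0.22078942915085056

    # Residual between the reported iteration time and the phase sum.
    overhead = complete_iteration - sum(phases.values())
    print(f"unaccounted per-iteration overhead: {overhead:.6f} s")  # ~9e-5 s

On this job the residual is under 0.1 ms per iteration, so step_physics (~0.137 s) dominates the ~0.221 s iteration time, with the agent's own compute a distant second.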