
Evaluator 5109

ID: 5109
evaluator: gpu-production-spot-2-02
owner: I don't have one 😀
machine: gpu-prod_445a13f6449b
process: gpu-production-spot-2-02_445a13f6449b
version: 6.2.7
first heard:
last heard:
status: inactive
# evaluating:
# success: 17 (job 65893)
# timeout:
# failed: 2 (job 66097)
# error:
# aborted: 1 (job 66324)
# host-error: 2 (job 66157)
arm: 0
x86_64: 1
Mac: 0
gpu available: 1
Number of processors: 64
Processor frequency: 0.0 GHz
Free % of processors: 96%
RAM total: 249.0 GB
RAM free: 237.8 GB
Disk: 969.3 GB
Disk available: 866.7 GB
Docker Hub:
P1: 1
P2:
Cloud simulations: 1
PI Camera: 0
# Duckiebots: 0
Map 3x3 available:
Number of duckies:
gpu cores:
AIDO 2 Map LF public:
AIDO 2 Map LF private:
AIDO 2 Map LFV public:
AIDO 2 Map LFV private:
AIDO 2 Map LFVI public:
AIDO 2 Map LFVI private:
AIDO 3 Map LF public:
AIDO 3 Map LF private:
AIDO 3 Map LFV public:
AIDO 3 Map LFV private:
AIDO 3 Map LFVI public:
AIDO 3 Map LFVI private:
AIDO 5 Map large loop:
ETU track:
for 2021, map is ETH_small_inter:
IPFS mountpoint /ipfs available:
IPNS mountpoint /ipns available:

Evaluator jobs

Columns: job ID, submission, user, user label, challenge, step, status, up to date, evaluator, date started, date completed, duration, message.
Job 66324: submission 13798, Nicholas Kostelnik, "template-random", aido-hello-sim-validation, step 370, status aborted, up to date: no, evaluator gpu-production-spot-2-02, duration 0:00:46
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 261, in _raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.8/dist-packages/requests/models.py", line 941, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.35/images/create?tag=sha256%3Ab13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691&fromImage=docker.io%2Fnitaigao%2Faido-submissions

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 65, in docker_pull
    pulling = client.api.pull(repository=repository, tag=br.tag, stream=True, decode=True)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/image.py", line 415, in pull
    self._raise_for_status(response)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/local/lib/python3.8/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.ImageNotFound: 404 Client Error: Not Found ("pull access denied for nitaigao/aido-submissions, repository does not exist or may require 'docker login': denied: requested access to the resource is denied")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 38, in docker_pull_retry
    return docker_pull(client, image_name, quiet=quiet)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 84, in docker_pull
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: Cannot pull repo  docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691  tag  None

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 745, in get_cr
    cr = run_single(
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 944, in run_single
    docker_pull_retry(client, image, ntimes=4, wait=5)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 42, in docker_pull_retry
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: After trying 4 I still could not pull docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691
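The root cause above is on the registry side: the submission image from nitaigao/aido-submissions can no longer be pulled (the repository was removed or made private), so the runner gives up after four attempts. A rough sketch of such a pull-with-retry pattern, using the Docker SDK for Python; this is an illustration only, not the actual duckietown_build_utils.docker_pulling code, and pull_with_retry is a hypothetical helper name:

import time

import docker
from docker.errors import APIError, ImageNotFound


def pull_with_retry(image_ref: str, ntimes: int = 4, wait: float = 5.0):
    """Pull image_ref (e.g. "repo:tag" or "repo@sha256:..."), retrying transient errors.

    A 404 / "pull access denied" means the repository is gone or private,
    so it is re-raised immediately instead of being retried.
    """
    client = docker.from_env()
    last = None
    for attempt in range(1, ntimes + 1):
        try:
            return client.images.pull(image_ref)
        except ImageNotFound:
            raise  # permanent: the image is not accessible from this host
        except APIError as e:  # transient registry or network trouble
            last = e
            time.sleep(wait)
    raise RuntimeError(f"Could not pull {image_ref} after {ntimes} attempts") from last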
Job 66316: submission 13910, YU CHEN, "CBC Net v2 - test", aido-LFP-sim-validation, step sim-1of4, status success, up to date: no, evaluator gpu-production-spot-2-02, duration 0:01:48
survival_time_median: 6.399999999999985
in-drivable-lane_median: 3.1499999999999897
driven_lanedir_consec_median: 1.2141289844076188
deviation-center-line_median: 0.219261114044803

other stats (single episode, so max = mean = median = min; one value per metric):
agent_compute-ego0: 0.09666490185168362
complete-iteration: 0.2785905460978663
deviation-center-line: 0.219261114044803
deviation-heading: 1.3063338801717024
driven_any: 2.053104161009727
driven_lanedir_consec: 1.2141289844076188
driven_lanedir: 1.2141289844076188
get_duckie_state: 0.0044998512711635855
get_robot_state: 0.003887424173281174
get_state_dump: 0.005749883577805157
get_ui_image: 0.02799563629682674
in-drivable-lane: 3.1499999999999897
per-episodes
details{"LFP-norm-small_loop-000-ego0": {"driven_any": 2.053104161009727, "get_ui_image": 0.02799563629682674, "step_physics": 0.1248599558837654, "survival_time": 6.399999999999985, "driven_lanedir": 1.2141289844076188, "get_state_dump": 0.005749883577805157, "get_robot_state": 0.003887424173281174, "sim_render-ego0": 0.004019149514131768, "get_duckie_state": 0.0044998512711635855, "in-drivable-lane": 3.1499999999999897, "deviation-heading": 1.3063338801717024, "agent_compute-ego0": 0.09666490185168362, "complete-iteration": 0.2785905460978663, "set_robot_commands": 0.0024804773256760235, "deviation-center-line": 0.219261114044803, "driven_lanedir_consec": 1.2141289844076188, "sim_compute_sim_state": 0.006246559379636779, "sim_compute_performance-ego0": 0.0020884617354518685}}
set_robot_commands: 0.0024804773256760235
sim_compute_performance-ego0: 0.0020884617354518685
sim_compute_sim_state: 0.006246559379636779
sim_render-ego0: 0.004019149514131768
simulation-passed: 1
step_physics: 0.1248599558837654
survival_time: 6.399999999999985
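Each aggregate above is simply a statistic over the per-episode values recorded in the details entry; because this step ran a single episode, max, mean, median and min all collapse to the same number. A rough sketch of that aggregation (illustrative only, not the actual Duckietown scoring code; aggregate_metrics is a hypothetical helper):

import statistics
from typing import Dict, List


def aggregate_metrics(per_episode: Dict[str, Dict[str, float]]) -> Dict[str, Dict[str, float]]:
    # Collect each metric's values across all episodes...
    values: Dict[str, List[float]] = {}
    for episode_stats in per_episode.values():
        for name, value in episode_stats.items():
            values.setdefault(name, []).append(value)
    # ...then report min/max/mean/median per metric.
    return {
        name: {
            "min": min(vs),
            "max": max(vs),
            "mean": statistics.mean(vs),
            "median": statistics.median(vs),
        }
        for name, vs in values.items()
    }

# With one episode, e.g. {"LFP-norm-small_loop-000-ego0": {"survival_time": 6.4, ...}},
# every statistic equals the single recorded value.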
Job 66309: submission 13912, YU CHEN, "CBC Net v2 - test", aido-LFP-sim-validation, step sim-0of4, status success, up to date: no, evaluator gpu-production-spot-2-02, duration 0:01:17
survival_time_median: 2.4499999999999993
in-drivable-lane_median: 0.40000000000000024
driven_lanedir_consec_median: 0.3597090507981662
deviation-center-line_median: 0.19115951570841871

other stats (single episode, so max = mean = median = min; one value per metric):
agent_compute-ego0: 0.0995222282409668
complete-iteration: 0.3602943992614746
deviation-center-line: 0.19115951570841871
deviation-heading: 0.7788934352935651
driven_any: 0.5410051658366497
driven_lanedir_consec: 0.3597090507981662
driven_lanedir: 0.3597090507981662
get_duckie_state: 0.02304306983947754
get_robot_state: 0.004253664016723633
get_state_dump: 0.009109692573547363
get_ui_image: 0.042979159355163575
in-drivable-lane: 0.40000000000000024
per-episodes
details{"LFP-norm-zigzag-000-ego0": {"driven_any": 0.5410051658366497, "get_ui_image": 0.042979159355163575, "step_physics": 0.16181224346160888, "survival_time": 2.4499999999999993, "driven_lanedir": 0.3597090507981662, "get_state_dump": 0.009109692573547363, "get_robot_state": 0.004253664016723633, "sim_render-ego0": 0.004188389778137207, "get_duckie_state": 0.02304306983947754, "in-drivable-lane": 0.40000000000000024, "deviation-heading": 0.7788934352935651, "agent_compute-ego0": 0.0995222282409668, "complete-iteration": 0.3602943992614746, "set_robot_commands": 0.0027077579498291015, "deviation-center-line": 0.19115951570841871, "driven_lanedir_consec": 0.3597090507981662, "sim_compute_sim_state": 0.010296549797058106, "sim_compute_performance-ego0": 0.002258067131042481}}
set_robot_commands: 0.0027077579498291015
sim_compute_performance-ego0: 0.002258067131042481
sim_compute_sim_state: 0.010296549797058106
sim_render-ego0: 0.004188389778137207
simulation-passed: 1
step_physics: 0.16181224346160888
survival_time: 2.4499999999999993
Job 66306: submission 13939, YU CHEN, "CBC Net v2 test - added mar 31 dataset", aido-LFP-sim-validation, step sim-0of4, status success, up to date: no, evaluator gpu-production-spot-2-02, duration 0:01:32
survival_time_median: 2.499999999999999
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 0.4989889488340616
deviation-center-line_median: 0.17446425597543777

other stats (single episode, so max = mean = median = min; one value per metric):
agent_compute-ego0: 0.10764143981185614
complete-iteration: 0.36647486686706543
deviation-center-line: 0.17446425597543777
deviation-heading: 0.9500626780069856
driven_any: 0.5288161073485546
driven_lanedir_consec: 0.4989889488340616
driven_lanedir: 0.4989889488340616
get_duckie_state: 0.024298625833847943
get_robot_state: 0.004412711835375019
get_state_dump: 0.00948163574817134
get_ui_image: 0.04334951849544749
in-drivable-lane: 0.0
per-episodes
details{"LFP-norm-zigzag-000-ego0": {"driven_any": 0.5288161073485546, "get_ui_image": 0.04334951849544749, "step_physics": 0.1564185338861802, "survival_time": 2.499999999999999, "driven_lanedir": 0.4989889488340616, "get_state_dump": 0.00948163574817134, "get_robot_state": 0.004412711835375019, "sim_render-ego0": 0.004428447461595722, "get_duckie_state": 0.024298625833847943, "in-drivable-lane": 0.0, "deviation-heading": 0.9500626780069856, "agent_compute-ego0": 0.10764143981185614, "complete-iteration": 0.36647486686706543, "set_robot_commands": 0.0027354558308919272, "deviation-center-line": 0.17446425597543777, "driven_lanedir_consec": 0.4989889488340616, "sim_compute_sim_state": 0.011223746281044156, "sim_compute_performance-ego0": 0.0023738776936250575}}
set_robot_commands: 0.0027354558308919272
sim_compute_performance-ego0: 0.0023738776936250575
sim_compute_sim_state: 0.011223746281044156
sim_render-ego0: 0.004428447461595722
simulation-passed: 1
step_physics: 0.1564185338861802
survival_time: 2.499999999999999
Job 66247: submission 13697, Samuel Alexander, "template-pytorch", aido-LF-sim-validation, step sim-0of4, status success, up to date: no, evaluator gpu-production-spot-2-02, duration 0:12:11
driven_lanedir_consec_median: 1.3661650553178517
survival_time_median: 59.99999999999873
deviation-center-line_median: 1.8824454173958731
in-drivable-lane_median: 29.899999999999338

other stats (single episode, so max = mean = median = min; one value per metric):
agent_compute-ego0: 0.015882722146306604
complete-iteration: 0.2834309288901552
deviation-center-line: 1.8824454173958731
deviation-heading: 23.41905505871295
driven_any: 4.49866995703628
driven_lanedir_consec: 1.3661650553178517
driven_lanedir: 1.3661650553178517
get_duckie_state: 1.3927932980654143e-06
get_robot_state: 0.0040899105214953525
get_state_dump: 0.005196156251638954
get_ui_image: 0.0312940415295832
in-drivable-lane: 29.899999999999338
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 4.49866995703628, "get_ui_image": 0.0312940415295832, "step_physics": 0.20882612382442528, "survival_time": 59.99999999999873, "driven_lanedir": 1.3661650553178517, "get_state_dump": 0.005196156251638954, "get_robot_state": 0.0040899105214953525, "sim_render-ego0": 0.004120134493393465, "get_duckie_state": 1.3927932980654143e-06, "in-drivable-lane": 29.899999999999338, "deviation-heading": 23.41905505871295, "agent_compute-ego0": 0.015882722146306604, "complete-iteration": 0.2834309288901552, "set_robot_commands": 0.0025063204229324684, "deviation-center-line": 1.8824454173958731, "driven_lanedir_consec": 1.3661650553178517, "sim_compute_sim_state": 0.00923175021671038, "sim_compute_performance-ego0": 0.0021961737830474117}}
set_robot_commands: 0.0025063204229324684
sim_compute_performance-ego0: 0.0021961737830474117
sim_compute_sim_state: 0.00923175021671038
sim_render-ego0: 0.004120134493393465
simulation-passed: 1
step_physics: 0.20882612382442528
survival_time: 59.99999999999873
Job 66220: submission 13911, YU CHEN, "CBC Net v2 - test", aido-LF-sim-validation, step sim-1of4, status success, up to date: no, evaluator gpu-production-spot-2-02, duration 0:11:45
driven_lanedir_consec_median: 17.728266543708195
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.4557356757261393
in-drivable-lane_median: 13.34999999999974

other stats (single episode, so max = mean = median = min; one value per metric):
agent_compute-ego0: 0.0926542333718839
complete-iteration: 0.33084198378404905
deviation-center-line: 3.4557356757261393
deviation-heading: 14.326606252560628
driven_any: 23.75818909809252
driven_lanedir_consec: 17.728266543708195
driven_lanedir: 17.728266543708195
get_duckie_state: 1.3963665990011578e-06
get_robot_state: 0.003924156803572605
get_state_dump: 0.004852405694204008
get_ui_image: 0.03471232969298351
in-drivable-lane: 13.34999999999974
per-episodes
details{"LF-norm-techtrack-000-ego0": {"driven_any": 23.75818909809252, "get_ui_image": 0.03471232969298351, "step_physics": 0.1726468389576222, "survival_time": 59.99999999999873, "driven_lanedir": 17.728266543708195, "get_state_dump": 0.004852405694204008, "get_robot_state": 0.003924156803572605, "sim_render-ego0": 0.0040650770725755275, "get_duckie_state": 1.3963665990011578e-06, "in-drivable-lane": 13.34999999999974, "deviation-heading": 14.326606252560628, "agent_compute-ego0": 0.0926542333718839, "complete-iteration": 0.33084198378404905, "set_robot_commands": 0.0024788943456670424, "deviation-center-line": 3.4557356757261393, "driven_lanedir_consec": 17.728266543708195, "sim_compute_sim_state": 0.013228101992388748, "sim_compute_performance-ego0": 0.0021836400329818535}}
set_robot_commands: 0.0024788943456670424
sim_compute_performance-ego0: 0.0021836400329818535
sim_compute_sim_state: 0.013228101992388748
sim_render-ego0: 0.0040650770725755275
simulation-passed: 1
step_physics: 0.1726468389576222
survival_time: 59.99999999999873
Job 66209: submission 13998, Frank (Chude) Qian 🇨🇦, "baseline-behavior-cloning New Dataset", aido-LFP-sim-validation, step sim-1of4, status success, up to date: no, evaluator gpu-production-spot-2-02, duration 0:01:19
survival_time_median: 3.1999999999999966
in-drivable-lane_median: 1.7999999999999958
driven_lanedir_consec_median: 0.3821437184723568
deviation-center-line_median: 0.11961583025174335

other stats (single episode, so max = mean = median = min; one value per metric):
agent_compute-ego0: 0.06459978543795072
complete-iteration: 0.1931049163524921
deviation-center-line: 0.11961583025174335
deviation-heading: 0.8505676445242846
driven_any: 1.1232554004757236
driven_lanedir_consec: 0.3821437184723568
driven_lanedir: 0.3821437184723568
get_duckie_state: 0.004767120801485502
get_robot_state: 0.004280189367441031
get_state_dump: 0.006473511915940505
get_ui_image: 0.028587480691763072
in-drivable-lane: 1.7999999999999958
per-episodes
details{"LFP-norm-small_loop-000-ego0": {"driven_any": 1.1232554004757236, "get_ui_image": 0.028587480691763072, "step_physics": 0.06895013222327599, "survival_time": 3.1999999999999966, "driven_lanedir": 0.3821437184723568, "get_state_dump": 0.006473511915940505, "get_robot_state": 0.004280189367441031, "sim_render-ego0": 0.004230114129873423, "get_duckie_state": 0.004767120801485502, "in-drivable-lane": 1.7999999999999958, "deviation-heading": 0.8505676445242846, "agent_compute-ego0": 0.06459978543795072, "complete-iteration": 0.1931049163524921, "set_robot_commands": 0.0026903776022104116, "deviation-center-line": 0.11961583025174335, "driven_lanedir_consec": 0.3821437184723568, "sim_compute_sim_state": 0.006205338698167067, "sim_compute_performance-ego0": 0.002213012255155123}}
set_robot_commands: 0.0026903776022104116
sim_compute_performance-ego0: 0.002213012255155123
sim_compute_sim_state: 0.006205338698167067
sim_render-ego0: 0.004230114129873423
simulation-passed: 1
step_physics: 0.06895013222327599
survival_time: 3.1999999999999966
Job 66195: submission 13998, Frank (Chude) Qian 🇨🇦, "baseline-behavior-cloning New Dataset", aido-LFP-sim-validation, step sim-1of4, status success, up to date: no, evaluator gpu-production-spot-2-02, duration 0:01:29
survival_time_median: 3.1999999999999966
in-drivable-lane_median: 1.7999999999999958
driven_lanedir_consec_median: 0.3821437184723568
deviation-center-line_median: 0.11961583025174335

other stats (single episode, so max = mean = median = min; one value per metric):
agent_compute-ego0: 0.06475620636573205
complete-iteration: 0.1902884630056528
deviation-center-line: 0.11961583025174335
deviation-heading: 0.8505676445242846
driven_any: 1.1232554004757236
driven_lanedir_consec: 0.3821437184723568
driven_lanedir: 0.3821437184723568
get_duckie_state: 0.0045789571908804085
get_robot_state: 0.003949513802161584
get_state_dump: 0.006091422301072341
get_ui_image: 0.029118134425236628
in-drivable-lane: 1.7999999999999958
per-episodes
details{"LFP-norm-small_loop-000-ego0": {"driven_any": 1.1232554004757236, "get_ui_image": 0.029118134425236628, "step_physics": 0.06652406912583571, "survival_time": 3.1999999999999966, "driven_lanedir": 0.3821437184723568, "get_state_dump": 0.006091422301072341, "get_robot_state": 0.003949513802161584, "sim_render-ego0": 0.004178450657771184, "get_duckie_state": 0.0045789571908804085, "in-drivable-lane": 1.7999999999999958, "deviation-heading": 0.8505676445242846, "agent_compute-ego0": 0.06475620636573205, "complete-iteration": 0.1902884630056528, "set_robot_commands": 0.0024885030893179085, "deviation-center-line": 0.11961583025174335, "driven_lanedir_consec": 0.3821437184723568, "sim_compute_sim_state": 0.006303871594942533, "sim_compute_performance-ego0": 0.0021961175478421723}}
set_robot_commands: 0.0024885030893179085
sim_compute_performance-ego0: 0.0021961175478421723
sim_compute_sim_state: 0.006303871594942533
sim_render-ego0: 0.004178450657771184
simulation-passed: 1
step_physics: 0.06652406912583571
survival_time: 3.1999999999999966
Job 66190: submission 13965, YU CHEN, "CBC Net v2 test - APR 3 BC TFdata + mar 28 anomaly", aido-LFP-sim-validation, step sim-0of4, status success, up to date: no, evaluator gpu-production-spot-2-02, duration 0:01:30
survival_time_median: 2.4499999999999993
in-drivable-lane_median: 0.30000000000000027
driven_lanedir_consec_median: 0.3909817103508144
deviation-center-line_median: 0.22752962091616552

other stats (single episode, so max = mean = median = min; one value per metric):
agent_compute-ego0: 0.10489573955535889
complete-iteration: 0.3592308235168457
deviation-center-line: 0.22752962091616552
deviation-heading: 0.6503836986580688
driven_any: 0.5586696671406095
driven_lanedir_consec: 0.3909817103508144
driven_lanedir: 0.3909817103508144
get_duckie_state: 0.022922115325927736
get_robot_state: 0.004197621345520019
get_state_dump: 0.008820972442626952
get_ui_image: 0.043920068740844725
in-drivable-lane: 0.30000000000000027
per-episodes
details{"LFP-norm-zigzag-000-ego0": {"driven_any": 0.5586696671406095, "get_ui_image": 0.043920068740844725, "step_physics": 0.15505171298980713, "survival_time": 2.4499999999999993, "driven_lanedir": 0.3909817103508144, "get_state_dump": 0.008820972442626952, "get_robot_state": 0.004197621345520019, "sim_render-ego0": 0.00428166389465332, "get_duckie_state": 0.022922115325927736, "in-drivable-lane": 0.30000000000000027, "deviation-heading": 0.6503836986580688, "agent_compute-ego0": 0.10489573955535889, "complete-iteration": 0.3592308235168457, "set_robot_commands": 0.002714715003967285, "deviation-center-line": 0.22752962091616552, "driven_lanedir_consec": 0.3909817103508144, "sim_compute_sim_state": 0.010027508735656738, "sim_compute_performance-ego0": 0.002287449836730957}}
set_robot_commands: 0.002714715003967285
sim_compute_performance-ego0: 0.002287449836730957
sim_compute_sim_state: 0.010027508735656738
sim_render-ego0: 0.00428166389465332
simulation-passed: 1
step_physics: 0.15505171298980713
survival_time: 2.4499999999999993
Job 66176: submission 13992, Frank (Chude) Qian 🇨🇦, "CBC Net - MixTraining - Expert LF Human LFP", aido-LFP-sim-validation, step sim-1of4, status success, up to date: no, evaluator gpu-production-spot-2-02, duration 0:01:54
survival_time_median: 6.099999999999986
in-drivable-lane_median: 1.649999999999995
driven_lanedir_consec_median: 1.6964843550242206
deviation-center-line_median: 0.25183120651906316

other stats (single episode, so max = mean = median = min; one value per metric):
agent_compute-ego0: 0.06218811166964895
complete-iteration: 0.25962397916530205
deviation-center-line: 0.25183120651906316
deviation-heading: 1.5217339938431118
driven_any: 2.6129515061792823
driven_lanedir_consec: 1.6964843550242206
driven_lanedir: 1.6964843550242206
get_duckie_state: 0.00516710630277308
get_robot_state: 0.0044973768839022005
get_state_dump: 0.006668552150571249
get_ui_image: 0.030485062095207897
in-drivable-lane: 1.649999999999995
per-episodes
details{"LFP-norm-small_loop-000-ego0": {"driven_any": 2.6129515061792823, "get_ui_image": 0.030485062095207897, "step_physics": 0.13382185765398227, "survival_time": 6.099999999999986, "driven_lanedir": 1.6964843550242206, "get_state_dump": 0.006668552150571249, "get_robot_state": 0.0044973768839022005, "sim_render-ego0": 0.004486231299919811, "get_duckie_state": 0.00516710630277308, "in-drivable-lane": 1.649999999999995, "deviation-heading": 1.5217339938431118, "agent_compute-ego0": 0.06218811166964895, "complete-iteration": 0.25962397916530205, "set_robot_commands": 0.0028320657528512847, "deviation-center-line": 0.25183120651906316, "driven_lanedir_consec": 1.6964843550242206, "sim_compute_sim_state": 0.006951442578943764, "sim_compute_performance-ego0": 0.0024081013066981865}}
set_robot_commands: 0.0028320657528512847
sim_compute_performance-ego0: 0.0024081013066981865
sim_compute_sim_state: 0.006951442578943764
sim_render-ego0: 0.004486231299919811
simulation-passed: 1
step_physics: 0.13382185765398227
survival_time: 6.099999999999986
Job 66169: submission 13578, Márton Tim 🇭🇺, "3626", aido-LFV_multi-sim-validation, step 402, status host-error, up to date: no, evaluator gpu-production-spot-2-02, duration 0:01:06
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego1" aborted with the following error:

error in ego1 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(0, experiment_idx=0, checkpoint_idx=0, logger=context)
              ||   File "/submission/model.py", line 42, in __init__
              ||     dummy_env = wrap_env(config["env_config"], extra_config={
              ||   File "/submission/duckietown_utils/env.py", line 46, in wrap_env
              ||     env = SegmentObsWrapper(env, model=extra_config['model'])
              ||   File "/submission/duckietown_utils/wrappers/SegmentObsWrapper.py", line 43, in __init__
              ||     self.model.cuda()
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 274, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego1" aborted with the following error:

error in ego1 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(0, experiment_idx=0, checkpoint_idx=0, logger=context)
              ||   File "/submission/model.py", line 42, in __init__
              ||     dummy_env = wrap_env(config["env_config"], extra_config={
              ||   File "/submission/duckietown_utils/env.py", line 46, in wrap_env
              ||     env = SegmentObsWrapper(env, model=extra_config['model'])
              ||   File "/submission/duckietown_utils/wrappers/SegmentObsWrapper.py", line 43, in __init__
              ||     self.model.cuda()
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||
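The failure is in the agent's init: SegmentObsWrapper calls self.model.cuda() while the evaluator's GPU memory is already exhausted, so the experiment manager classifies the run as a host error rather than a submission failure. A defensive pattern a submission could use, sketched here under the assumption that running on CPU is an acceptable fallback (to_best_device is a hypothetical helper, not part of the submission shown above):

import torch


def to_best_device(model: torch.nn.Module) -> torch.nn.Module:
    # Keep the model on CPU if no GPU is visible to the container.
    if not torch.cuda.is_available():
        return model.cpu()
    try:
        return model.cuda()
    except RuntimeError as e:  # e.g. "CUDA error: out of memory"
        # Release whatever was cached during the failed transfer and fall back.
        torch.cuda.empty_cache()
        print(f"GPU transfer failed ({e}); running on CPU instead")
        return model.cpu()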

Job 66157: submission 13578, Márton Tim 🇭🇺, "3626", aido-LFV_multi-sim-validation, step 402, status host-error, up to date: no, evaluator gpu-production-spot-2-02, duration 0:01:08
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego1" aborted with the following error:

error in ego1 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(0, experiment_idx=0, checkpoint_idx=0, logger=context)
              ||   File "/submission/model.py", line 42, in __init__
              ||     dummy_env = wrap_env(config["env_config"], extra_config={
              ||   File "/submission/duckietown_utils/env.py", line 46, in wrap_env
              ||     env = SegmentObsWrapper(env, model=extra_config['model'])
              ||   File "/submission/duckietown_utils/wrappers/SegmentObsWrapper.py", line 43, in __init__
              ||     self.model.cuda()
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 274, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego1" aborted with the following error:

error in ego1 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(0, experiment_idx=0, checkpoint_idx=0, logger=context)
              ||   File "/submission/model.py", line 42, in __init__
              ||     dummy_env = wrap_env(config["env_config"], extra_config={
              ||   File "/submission/duckietown_utils/env.py", line 46, in wrap_env
              ||     env = SegmentObsWrapper(env, model=extra_config['model'])
              ||   File "/submission/duckietown_utils/wrappers/SegmentObsWrapper.py", line 43, in __init__
              ||     self.model.cuda()
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

Job 66137: submission 14034, YU CHEN, "CBC V2, mar28_apr6 bc, mar31_apr6 anomaly", aido-LFP-sim-validation, step sim-3of4, status success, up to date: no, evaluator gpu-production-spot-2-02, duration 0:03:12
survival_time_median: 11.60000000000003
in-drivable-lane_median: 3.6000000000000103
driven_lanedir_consec_median: 2.342764609546682
deviation-center-line_median: 0.5666368577688168

other stats (single episode, so max = mean = median = min; one value per metric):
agent_compute-ego0: 0.098984693764617
complete-iteration: 0.3797480047004929
deviation-center-line: 0.5666368577688168
deviation-heading: 2.5829280570292843
driven_any: 3.7781554669200257
driven_lanedir_consec: 2.342764609546682
driven_lanedir: 2.342764609546682
get_duckie_state: 0.022889573174996437
get_robot_state: 0.0041080298853534486
get_state_dump: 0.009212237059302596
get_ui_image: 0.03953816962344452
in-drivable-lane: 3.6000000000000103
per-episodes
details{"LFP-norm-techtrack-000-ego0": {"driven_any": 3.7781554669200257, "get_ui_image": 0.03953816962344452, "step_physics": 0.1821499484803032, "survival_time": 11.60000000000003, "driven_lanedir": 2.342764609546682, "get_state_dump": 0.009212237059302596, "get_robot_state": 0.0041080298853534486, "sim_render-ego0": 0.004157651647477703, "get_duckie_state": 0.022889573174996437, "in-drivable-lane": 3.6000000000000103, "deviation-heading": 2.5829280570292843, "agent_compute-ego0": 0.098984693764617, "complete-iteration": 0.3797480047004929, "set_robot_commands": 0.0025550531215422145, "deviation-center-line": 0.5666368577688168, "driven_lanedir_consec": 2.342764609546682, "sim_compute_sim_state": 0.013823461123290493, "sim_compute_performance-ego0": 0.002223985901206348}}
set_robot_commands_max0.0025550531215422145
set_robot_commands_mean0.0025550531215422145
set_robot_commands_median0.0025550531215422145
set_robot_commands_min0.0025550531215422145
sim_compute_performance-ego0_max0.002223985901206348
sim_compute_performance-ego0_mean0.002223985901206348
sim_compute_performance-ego0_median0.002223985901206348
sim_compute_performance-ego0_min0.002223985901206348
sim_compute_sim_state_max0.013823461123290493
sim_compute_sim_state_mean0.013823461123290493
sim_compute_sim_state_median0.013823461123290493
sim_compute_sim_state_min0.013823461123290493
sim_render-ego0_max0.004157651647477703
sim_render-ego0_mean0.004157651647477703
sim_render-ego0_median0.004157651647477703
sim_render-ego0_min0.004157651647477703
simulation-passed1
step_physics_max0.1821499484803032
step_physics_mean0.1821499484803032
step_physics_median0.1821499484803032
step_physics_min0.1821499484803032
survival_time_max11.60000000000003
survival_time_mean11.60000000000003
survival_time_min11.60000000000003
No reset possible
Job 66124 | submission 14034 | YU CHEN | CBC V2, mar28_apr6 bc, mar31_apr6 anomaly | aido-LFP-sim-validation | sim-3of4 | success | up to date: no | evaluator gpu-production-spot-2-02 | duration 0:02:09
survival_time_median: 5.999999999999987
in-drivable-lane_median: 1.5499999999999945
driven_lanedir_consec_median: 1.317198361539025
deviation-center-line_median: 0.2945423099444384


other stats
agent_compute-ego0_max0.09333802648812288
agent_compute-ego0_mean0.09333802648812288
agent_compute-ego0_median0.09333802648812288
agent_compute-ego0_min0.09333802648812288
complete-iteration_max0.34849298295895914
complete-iteration_mean0.34849298295895914
complete-iteration_median0.34849298295895914
complete-iteration_min0.34849298295895914
deviation-center-line_max0.2945423099444384
deviation-center-line_mean0.2945423099444384
deviation-center-line_min0.2945423099444384
deviation-heading_max1.1634326318101318
deviation-heading_mean1.1634326318101318
deviation-heading_median1.1634326318101318
deviation-heading_min1.1634326318101318
driven_any_max1.9083082869207575
driven_any_mean1.9083082869207575
driven_any_median1.9083082869207575
driven_any_min1.9083082869207575
driven_lanedir_consec_max1.317198361539025
driven_lanedir_consec_mean1.317198361539025
driven_lanedir_consec_min1.317198361539025
driven_lanedir_max1.317198361539025
driven_lanedir_mean1.317198361539025
driven_lanedir_median1.317198361539025
driven_lanedir_min1.317198361539025
get_duckie_state_max0.022054680122816857
get_duckie_state_mean0.022054680122816857
get_duckie_state_median0.022054680122816857
get_duckie_state_min0.022054680122816857
get_robot_state_max0.003969515650725562
get_robot_state_mean0.003969515650725562
get_robot_state_median0.003969515650725562
get_robot_state_min0.003969515650725562
get_state_dump_max0.00833893216345921
get_state_dump_mean0.00833893216345921
get_state_dump_median0.00833893216345921
get_state_dump_min0.00833893216345921
get_ui_image_max0.0387031775860747
get_ui_image_mean0.0387031775860747
get_ui_image_median0.0387031775860747
get_ui_image_min0.0387031775860747
in-drivable-lane_max1.5499999999999945
in-drivable-lane_mean1.5499999999999945
in-drivable-lane_min1.5499999999999945
per-episodes
details{"LFP-norm-techtrack-000-ego0": {"driven_any": 1.9083082869207575, "get_ui_image": 0.0387031775860747, "step_physics": 0.15973867069591174, "survival_time": 5.999999999999987, "driven_lanedir": 1.317198361539025, "get_state_dump": 0.00833893216345921, "get_robot_state": 0.003969515650725562, "sim_render-ego0": 0.0041350274046590505, "get_duckie_state": 0.022054680122816857, "in-drivable-lane": 1.5499999999999945, "deviation-heading": 1.1634326318101318, "agent_compute-ego0": 0.09333802648812288, "complete-iteration": 0.34849298295895914, "set_robot_commands": 0.002563598727391771, "deviation-center-line": 0.2945423099444384, "driven_lanedir_consec": 1.317198361539025, "sim_compute_sim_state": 0.013380685128456304, "sim_compute_performance-ego0": 0.002160411235714747}}
set_robot_commands_max0.002563598727391771
set_robot_commands_mean0.002563598727391771
set_robot_commands_median0.002563598727391771
set_robot_commands_min0.002563598727391771
sim_compute_performance-ego0_max0.002160411235714747
sim_compute_performance-ego0_mean0.002160411235714747
sim_compute_performance-ego0_median0.002160411235714747
sim_compute_performance-ego0_min0.002160411235714747
sim_compute_sim_state_max0.013380685128456304
sim_compute_sim_state_mean0.013380685128456304
sim_compute_sim_state_median0.013380685128456304
sim_compute_sim_state_min0.013380685128456304
sim_render-ego0_max0.0041350274046590505
sim_render-ego0_mean0.0041350274046590505
sim_render-ego0_median0.0041350274046590505
sim_render-ego0_min0.0041350274046590505
simulation-passed1
step_physics_max0.15973867069591174
step_physics_mean0.15973867069591174
step_physics_median0.15973867069591174
step_physics_min0.15973867069591174
survival_time_max5.999999999999987
survival_time_mean5.999999999999987
survival_time_min5.999999999999987
No reset possible
Job 66121 | submission 13504 | András Kalapos 🇭🇺 | real-v1.0-3091-310 | aido-LF-sim-testing | sim-1of4 | failed | up to date: no | evaluator gpu-production-spot-2-02 | duration 0:00:41
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
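The failure above originates in the agent's init(): RLlib builds a PPOTrainer and TensorFlow then fails with "Failed to get convolution algorithm ... cuDNN failed to initialize", which on shared GPU evaluators is most often a GPU-memory issue (TensorFlow reserving the whole device, or the device already being saturated by another process). A hedged mitigation sketch, assuming TensorFlow 2.x as in the traceback, run before any session or model is created:

    import tensorflow as tf

    # Allocate GPU memory on demand instead of reserving the full device,
    # a common workaround for cuDNN initialization failures.
    for gpu in tf.config.list_physical_devices("GPU"):
        try:
            tf.config.experimental.set_memory_growth(gpu, True)
        except RuntimeError:
            # Memory growth must be set before the GPU is first initialized.
            pass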
Job 66110 | submission 13570 | Márton Tim 🇭🇺 | 3626 | aido-LFP-sim-testing | sim-1of4 | success | up to date: no | evaluator gpu-production-spot-2-02 | duration 0:02:12
survival_time_median: 3.049999999999997
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 0.9594206688584253
deviation-center-line_median: 0.1494880209150875


other stats
agent_compute-ego0_max0.045971297448681246
agent_compute-ego0_mean0.045971297448681246
agent_compute-ego0_median0.045971297448681246
agent_compute-ego0_min0.045971297448681246
complete-iteration_max0.217542952106845
complete-iteration_mean0.217542952106845
complete-iteration_median0.217542952106845
complete-iteration_min0.217542952106845
deviation-center-line_max0.1494880209150875
deviation-center-line_mean0.1494880209150875
deviation-center-line_min0.1494880209150875
deviation-heading_max0.6492339028294281
deviation-heading_mean0.6492339028294281
deviation-heading_median0.6492339028294281
deviation-heading_min0.6492339028294281
driven_any_max0.9757909729486524
driven_any_mean0.9757909729486524
driven_any_median0.9757909729486524
driven_any_min0.9757909729486524
driven_lanedir_consec_max0.9594206688584253
driven_lanedir_consec_mean0.9594206688584253
driven_lanedir_consec_min0.9594206688584253
driven_lanedir_max0.9594206688584253
driven_lanedir_mean0.9594206688584253
driven_lanedir_median0.9594206688584253
driven_lanedir_min0.9594206688584253
get_duckie_state_max0.004915821936822707
get_duckie_state_mean0.004915821936822707
get_duckie_state_median0.004915821936822707
get_duckie_state_min0.004915821936822707
get_robot_state_max0.003981790234965663
get_robot_state_mean0.003981790234965663
get_robot_state_median0.003981790234965663
get_robot_state_min0.003981790234965663
get_state_dump_max0.00587636040103051
get_state_dump_mean0.00587636040103051
get_state_dump_median0.00587636040103051
get_state_dump_min0.00587636040103051
get_ui_image_max0.0297349460663334
get_ui_image_mean0.0297349460663334
get_ui_image_median0.0297349460663334
get_ui_image_min0.0297349460663334
in-drivable-lane_max0.0
in-drivable-lane_mean0.0
in-drivable-lane_min0.0
per-episodes
details{"LFP-norm-small_loop-000-ego0": {"driven_any": 0.9757909729486524, "get_ui_image": 0.0297349460663334, "step_physics": 0.11056331665285173, "survival_time": 3.049999999999997, "driven_lanedir": 0.9594206688584253, "get_state_dump": 0.00587636040103051, "get_robot_state": 0.003981790234965663, "sim_render-ego0": 0.004395957916013656, "get_duckie_state": 0.004915821936822707, "in-drivable-lane": 0.0, "deviation-heading": 0.6492339028294281, "agent_compute-ego0": 0.045971297448681246, "complete-iteration": 0.217542952106845, "set_robot_commands": 0.0027842790849747197, "deviation-center-line": 0.1494880209150875, "driven_lanedir_consec": 0.9594206688584253, "sim_compute_sim_state": 0.007080401143720073, "sim_compute_performance-ego0": 0.00214447513703377}}
set_robot_commands_max0.0027842790849747197
set_robot_commands_mean0.0027842790849747197
set_robot_commands_median0.0027842790849747197
set_robot_commands_min0.0027842790849747197
sim_compute_performance-ego0_max0.00214447513703377
sim_compute_performance-ego0_mean0.00214447513703377
sim_compute_performance-ego0_median0.00214447513703377
sim_compute_performance-ego0_min0.00214447513703377
sim_compute_sim_state_max0.007080401143720073
sim_compute_sim_state_mean0.007080401143720073
sim_compute_sim_state_median0.007080401143720073
sim_compute_sim_state_min0.007080401143720073
sim_render-ego0_max0.004395957916013656
sim_render-ego0_mean0.004395957916013656
sim_render-ego0_median0.004395957916013656
sim_render-ego0_min0.004395957916013656
simulation-passed1
step_physics_max0.11056331665285173
step_physics_mean0.11056331665285173
step_physics_median0.11056331665285173
step_physics_min0.11056331665285173
survival_time_max3.049999999999997
survival_time_mean3.049999999999997
survival_time_min3.049999999999997
No reset possible
Job 66097 | submission 13511 | András Kalapos 🇭🇺 | real-v1.0-3091-310 | aido-LFP-sim-validation | sim-0of4 | failed | up to date: no | evaluator gpu-production-spot-2-02 | duration 0:02:39
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 216, in main
    raise InvalidSubmission(msg)
duckietown_challenges.exceptions.InvalidSubmission: Timeout during connection to ego0: <SignalTimeout in state: 2>
No reset possible
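This job was rejected because the experiment manager timed out while connecting to the agent container ("Timeout during connection to ego0"). Conceptually the manager gives the agent a bounded window to answer the node-protocol handshake; the sketch below is only a generic illustration of that bounded-wait pattern (connect_to_agent is a hypothetical coroutine, not the experiment manager's actual API):

    import asyncio

    async def wait_for_agent(connect_to_agent, timeout_s: float = 60.0):
        # Bound how long we wait for the agent node to answer the handshake.
        try:
            return await asyncio.wait_for(connect_to_agent(), timeout=timeout_s)
        except asyncio.TimeoutError as e:
            raise RuntimeError("Timeout during connection to ego0") from e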
Job 66059 | submission 13565 | Márton Tim 🇭🇺 | 3626 | aido-LF-sim-testing | sim-2of4 | success | up to date: no | evaluator gpu-production-spot-2-02 | duration 0:10:32
driven_lanedir_consec_median: 29.54958684979631
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.600438644143271
in-drivable-lane_median: 0.0


other stats
agent_compute-ego0_max0.04619194010115981
agent_compute-ego0_mean0.04619194010115981
agent_compute-ego0_median0.04619194010115981
agent_compute-ego0_min0.04619194010115981
complete-iteration_max0.23288954743536985
complete-iteration_mean0.23288954743536985
complete-iteration_median0.23288954743536985
complete-iteration_min0.23288954743536985
deviation-center-line_max3.600438644143271
deviation-center-line_mean3.600438644143271
deviation-center-line_min3.600438644143271
deviation-heading_max10.035000177880152
deviation-heading_mean10.035000177880152
deviation-heading_median10.035000177880152
deviation-heading_min10.035000177880152
driven_any_max30.10165401092269
driven_any_mean30.10165401092269
driven_any_median30.10165401092269
driven_any_min30.10165401092269
driven_lanedir_consec_max29.54958684979631
driven_lanedir_consec_mean29.54958684979631
driven_lanedir_consec_min29.54958684979631
driven_lanedir_max29.54958684979631
driven_lanedir_mean29.54958684979631
driven_lanedir_median29.54958684979631
driven_lanedir_min29.54958684979631
get_duckie_state_max1.9806013019952448e-06
get_duckie_state_mean1.9806013019952448e-06
get_duckie_state_median1.9806013019952448e-06
get_duckie_state_min1.9806013019952448e-06
get_robot_state_max0.003952884554962234
get_robot_state_mean0.003952884554962234
get_robot_state_median0.003952884554962234
get_robot_state_min0.003952884554962234
get_state_dump_max0.004855296891694462
get_state_dump_mean0.004855296891694462
get_state_dump_median0.004855296891694462
get_state_dump_min0.004855296891694462
get_ui_image_max0.028588699162949333
get_ui_image_mean0.028588699162949333
get_ui_image_median0.028588699162949333
get_ui_image_min0.028588699162949333
in-drivable-lane_max0.0
in-drivable-lane_mean0.0
in-drivable-lane_min0.0
per-episodes
details{"LF-norm-small_loop-000-ego0": {"driven_any": 30.10165401092269, "get_ui_image": 0.028588699162949333, "step_physics": 0.1338893209865548, "survival_time": 59.99999999999873, "driven_lanedir": 29.54958684979631, "get_state_dump": 0.004855296891694462, "get_robot_state": 0.003952884554962234, "sim_render-ego0": 0.004049269186269235, "get_duckie_state": 1.9806013019952448e-06, "in-drivable-lane": 0.0, "deviation-heading": 10.035000177880152, "agent_compute-ego0": 0.04619194010115981, "complete-iteration": 0.23288954743536985, "set_robot_commands": 0.002467622764898677, "deviation-center-line": 3.600438644143271, "driven_lanedir_consec": 29.54958684979631, "sim_compute_sim_state": 0.006690084686088721, "sim_compute_performance-ego0": 0.0021141730776238106}}
set_robot_commands_max0.002467622764898677
set_robot_commands_mean0.002467622764898677
set_robot_commands_median0.002467622764898677
set_robot_commands_min0.002467622764898677
sim_compute_performance-ego0_max0.0021141730776238106
sim_compute_performance-ego0_mean0.0021141730776238106
sim_compute_performance-ego0_median0.0021141730776238106
sim_compute_performance-ego0_min0.0021141730776238106
sim_compute_sim_state_max0.006690084686088721
sim_compute_sim_state_mean0.006690084686088721
sim_compute_sim_state_median0.006690084686088721
sim_compute_sim_state_min0.006690084686088721
sim_render-ego0_max0.004049269186269235
sim_render-ego0_mean0.004049269186269235
sim_render-ego0_median0.004049269186269235
sim_render-ego0_min0.004049269186269235
simulation-passed1
step_physics_max0.1338893209865548
step_physics_mean0.1338893209865548
step_physics_median0.1338893209865548
step_physics_min0.1338893209865548
survival_time_max59.99999999999873
survival_time_mean59.99999999999873
survival_time_min59.99999999999873
No reset possible
Job 66051 | submission 13585 | Andras Beres | 202-1 | aido-LFP-sim-testing | sim-2of4 | success | up to date: no | evaluator gpu-production-spot-2-02 | duration 0:01:12
survival_time_median: 1.7500000000000009
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 0.390135701726378
deviation-center-line_median: 0.11266712278315714


other stats
agent_compute-ego0_max0.016956335968441434
agent_compute-ego0_mean0.016956335968441434
agent_compute-ego0_median0.016956335968441434
agent_compute-ego0_min0.016956335968441434
complete-iteration_max0.19786355230543348
complete-iteration_mean0.19786355230543348
complete-iteration_median0.19786355230543348
complete-iteration_min0.19786355230543348
deviation-center-line_max0.11266712278315714
deviation-center-line_mean0.11266712278315714
deviation-center-line_min0.11266712278315714
deviation-heading_max0.1823824437593323
deviation-heading_mean0.1823824437593323
deviation-heading_median0.1823824437593323
deviation-heading_min0.1823824437593323
driven_any_max0.3942495053034371
driven_any_mean0.3942495053034371
driven_any_median0.3942495053034371
driven_any_min0.3942495053034371
driven_lanedir_consec_max0.390135701726378
driven_lanedir_consec_mean0.390135701726378
driven_lanedir_consec_min0.390135701726378
driven_lanedir_max0.390135701726378
driven_lanedir_mean0.390135701726378
driven_lanedir_median0.390135701726378
driven_lanedir_min0.390135701726378
get_duckie_state_max0.026029977533552386
get_duckie_state_mean0.026029977533552386
get_duckie_state_median0.026029977533552386
get_duckie_state_min0.026029977533552386
get_robot_state_max0.003912680678897434
get_robot_state_mean0.003912680678897434
get_robot_state_median0.003912680678897434
get_robot_state_min0.003912680678897434
get_state_dump_max0.009490595923529733
get_state_dump_mean0.009490595923529733
get_state_dump_median0.009490595923529733
get_state_dump_min0.009490595923529733
get_ui_image_max0.03360078732172648
get_ui_image_mean0.03360078732172648
get_ui_image_median0.03360078732172648
get_ui_image_min0.03360078732172648
in-drivable-lane_max0.0
in-drivable-lane_mean0.0
in-drivable-lane_min0.0
per-episodes
details{"LFP-norm-loop-000-ego0": {"driven_any": 0.3942495053034371, "get_ui_image": 0.03360078732172648, "step_physics": 0.093816081682841, "survival_time": 1.7500000000000009, "driven_lanedir": 0.390135701726378, "get_state_dump": 0.009490595923529733, "get_robot_state": 0.003912680678897434, "sim_render-ego0": 0.004023585054609511, "get_duckie_state": 0.026029977533552386, "in-drivable-lane": 0.0, "deviation-heading": 0.1823824437593323, "agent_compute-ego0": 0.016956335968441434, "complete-iteration": 0.19786355230543348, "set_robot_commands": 0.002390351560380724, "deviation-center-line": 0.11266712278315714, "driven_lanedir_consec": 0.390135701726378, "sim_compute_sim_state": 0.005484859148661296, "sim_compute_performance-ego0": 0.0020604266060723197}}
set_robot_commands_max0.002390351560380724
set_robot_commands_mean0.002390351560380724
set_robot_commands_median0.002390351560380724
set_robot_commands_min0.002390351560380724
sim_compute_performance-ego0_max0.0020604266060723197
sim_compute_performance-ego0_mean0.0020604266060723197
sim_compute_performance-ego0_median0.0020604266060723197
sim_compute_performance-ego0_min0.0020604266060723197
sim_compute_sim_state_max0.005484859148661296
sim_compute_sim_state_mean0.005484859148661296
sim_compute_sim_state_median0.005484859148661296
sim_compute_sim_state_min0.005484859148661296
sim_render-ego0_max0.004023585054609511
sim_render-ego0_mean0.004023585054609511
sim_render-ego0_median0.004023585054609511
sim_render-ego0_min0.004023585054609511
simulation-passed1
step_physics_max0.093816081682841
step_physics_mean0.093816081682841
step_physics_median0.093816081682841
step_physics_min0.093816081682841
survival_time_max1.7500000000000009
survival_time_mean1.7500000000000009
survival_time_min1.7500000000000009
No reset possible
Job 66028 | submission 13943 | YU CHEN | CBC Net v2 test - added mar 31 anomaly + mar 28 bc_v1 | aido-LF-sim-validation | sim-2of4 | success | up to date: no | evaluator gpu-production-spot-2-02 | duration 0:10:29
driven_lanedir_consec_median: 12.144226819355596
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.401075746402986
in-drivable-lane_median: 28.599999999999262


other stats
agent_compute-ego0_max0.09411165319215646
agent_compute-ego0_mean0.09411165319215646
agent_compute-ego0_median0.09411165319215646
agent_compute-ego0_min0.09411165319215646
complete-iteration_max0.28329885254096826
complete-iteration_mean0.28329885254096826
complete-iteration_median0.28329885254096826
complete-iteration_min0.28329885254096826
deviation-center-line_max2.401075746402986
deviation-center-line_mean2.401075746402986
deviation-center-line_min2.401075746402986
deviation-heading_max14.393844606301707
deviation-heading_mean14.393844606301707
deviation-heading_median14.393844606301707
deviation-heading_min14.393844606301707
driven_any_max24.221899790821737
driven_any_mean24.221899790821737
driven_any_median24.221899790821737
driven_any_min24.221899790821737
driven_lanedir_consec_max12.144226819355596
driven_lanedir_consec_mean12.144226819355596
driven_lanedir_consec_min12.144226819355596
driven_lanedir_max12.144226819355596
driven_lanedir_mean12.144226819355596
driven_lanedir_median12.144226819355596
driven_lanedir_min12.144226819355596
get_duckie_state_max1.336216033249473e-06
get_duckie_state_mean1.336216033249473e-06
get_duckie_state_median1.336216033249473e-06
get_duckie_state_min1.336216033249473e-06
get_robot_state_max0.004214745377819306
get_robot_state_mean0.004214745377819306
get_robot_state_median0.004214745377819306
get_robot_state_min0.004214745377819306
get_state_dump_max0.00511295829188516
get_state_dump_mean0.00511295829188516
get_state_dump_median0.00511295829188516
get_state_dump_min0.00511295829188516
get_ui_image_max0.029353340102869906
get_ui_image_mean0.029353340102869906
get_ui_image_median0.029353340102869906
get_ui_image_min0.029353340102869906
in-drivable-lane_max28.599999999999262
in-drivable-lane_mean28.599999999999262
in-drivable-lane_min28.599999999999262
per-episodes
details{"LF-norm-small_loop-000-ego0": {"driven_any": 24.221899790821737, "get_ui_image": 0.029353340102869906, "step_physics": 0.13414124009214173, "survival_time": 59.99999999999873, "driven_lanedir": 12.144226819355596, "get_state_dump": 0.00511295829188516, "get_robot_state": 0.004214745377819306, "sim_render-ego0": 0.004279373091126758, "get_duckie_state": 1.336216033249473e-06, "in-drivable-lane": 28.599999999999262, "deviation-heading": 14.393844606301707, "agent_compute-ego0": 0.09411165319215646, "complete-iteration": 0.28329885254096826, "set_robot_commands": 0.0027250237111545025, "deviation-center-line": 2.401075746402986, "driven_lanedir_consec": 12.144226819355596, "sim_compute_sim_state": 0.006983738159954697, "sim_compute_performance-ego0": 0.0022779386506092537}}
set_robot_commands_max0.0027250237111545025
set_robot_commands_mean0.0027250237111545025
set_robot_commands_median0.0027250237111545025
set_robot_commands_min0.0027250237111545025
sim_compute_performance-ego0_max0.0022779386506092537
sim_compute_performance-ego0_mean0.0022779386506092537
sim_compute_performance-ego0_median0.0022779386506092537
sim_compute_performance-ego0_min0.0022779386506092537
sim_compute_sim_state_max0.006983738159954697
sim_compute_sim_state_mean0.006983738159954697
sim_compute_sim_state_median0.006983738159954697
sim_compute_sim_state_min0.006983738159954697
sim_render-ego0_max0.004279373091126758
sim_render-ego0_mean0.004279373091126758
sim_render-ego0_median0.004279373091126758
sim_render-ego0_min0.004279373091126758
simulation-passed1
step_physics_max0.13414124009214173
step_physics_mean0.13414124009214173
step_physics_median0.13414124009214173
step_physics_min0.13414124009214173
survival_time_max59.99999999999873
survival_time_mean59.99999999999873
survival_time_min59.99999999999873
No reset possible
Job 65971 | submission 14013 | YU CHEN | CBC Net v2 test - APR 6 anomaly + mar 28 bc | aido-LF-sim-validation | sim-3of4 | success | up to date: no | evaluator gpu-production-spot-2-02 | duration 0:12:11
driven_lanedir_consec_median: 13.408189182845918
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.6474376322364326
in-drivable-lane_median: 24.24999999999947


other stats
agent_compute-ego0_max0.0921596226148264
agent_compute-ego0_mean0.0921596226148264
agent_compute-ego0_median0.0921596226148264
agent_compute-ego0_min0.0921596226148264
complete-iteration_max0.34692408302046673
complete-iteration_mean0.34692408302046673
complete-iteration_median0.34692408302046673
complete-iteration_min0.34692408302046673
deviation-center-line_max2.6474376322364326
deviation-center-line_mean2.6474376322364326
deviation-center-line_min2.6474376322364326
deviation-heading_max13.67838566699588
deviation-heading_mean13.67838566699588
deviation-heading_median13.67838566699588
deviation-heading_min13.67838566699588
driven_any_max24.47667418383353
driven_any_mean24.47667418383353
driven_any_median24.47667418383353
driven_any_min24.47667418383353
driven_lanedir_consec_max13.408189182845918
driven_lanedir_consec_mean13.408189182845918
driven_lanedir_consec_min13.408189182845918
driven_lanedir_max13.408189182845918
driven_lanedir_mean13.408189182845918
driven_lanedir_median13.408189182845918
driven_lanedir_min13.408189182845918
get_duckie_state_max1.3022696743599084e-06
get_duckie_state_mean1.3022696743599084e-06
get_duckie_state_median1.3022696743599084e-06
get_duckie_state_min1.3022696743599084e-06
get_robot_state_max0.0041121700026411296
get_robot_state_mean0.0041121700026411296
get_robot_state_median0.0041121700026411296
get_robot_state_min0.0041121700026411296
get_state_dump_max0.005002861118237244
get_state_dump_mean0.005002861118237244
get_state_dump_median0.005002861118237244
get_state_dump_min0.005002861118237244
get_ui_image_max0.0405407932576887
get_ui_image_mean0.0405407932576887
get_ui_image_median0.0405407932576887
get_ui_image_min0.0405407932576887
in-drivable-lane_max24.24999999999947
in-drivable-lane_mean24.24999999999947
in-drivable-lane_min24.24999999999947
per-episodes
details{"LF-norm-zigzag-000-ego0": {"driven_any": 24.47667418383353, "get_ui_image": 0.0405407932576887, "step_physics": 0.1811900801900821, "survival_time": 59.99999999999873, "driven_lanedir": 13.408189182845918, "get_state_dump": 0.005002861118237244, "get_robot_state": 0.0041121700026411296, "sim_render-ego0": 0.0040875013225977865, "get_duckie_state": 1.3022696743599084e-06, "in-drivable-lane": 24.24999999999947, "deviation-heading": 13.67838566699588, "agent_compute-ego0": 0.0921596226148264, "complete-iteration": 0.34692408302046673, "set_robot_commands": 0.0025415168415993876, "deviation-center-line": 2.6474376322364326, "driven_lanedir_consec": 13.408189182845918, "sim_compute_sim_state": 0.015020441353866995, "sim_compute_performance-ego0": 0.002171868190082483}}
set_robot_commands_max0.0025415168415993876
set_robot_commands_mean0.0025415168415993876
set_robot_commands_median0.0025415168415993876
set_robot_commands_min0.0025415168415993876
sim_compute_performance-ego0_max0.002171868190082483
sim_compute_performance-ego0_mean0.002171868190082483
sim_compute_performance-ego0_median0.002171868190082483
sim_compute_performance-ego0_min0.002171868190082483
sim_compute_sim_state_max0.015020441353866995
sim_compute_sim_state_mean0.015020441353866995
sim_compute_sim_state_median0.015020441353866995
sim_compute_sim_state_min0.015020441353866995
sim_render-ego0_max0.0040875013225977865
sim_render-ego0_mean0.0040875013225977865
sim_render-ego0_median0.0040875013225977865
sim_render-ego0_min0.0040875013225977865
simulation-passed1
step_physics_max0.1811900801900821
step_physics_mean0.1811900801900821
step_physics_median0.1811900801900821
step_physics_min0.1811900801900821
survival_time_max59.99999999999873
survival_time_mean59.99999999999873
survival_time_min59.99999999999873
No reset possible
Job 65893 | submission 13587 | Andras Beres | 202-1 | aido-LFV-sim-validation | sim-1of4 | success | up to date: no | evaluator gpu-production-spot-2-02 | duration 0:15:13
survival_time_median: 22.450000000000184
in-drivable-lane_median: 1.5500000000000158
driven_lanedir_consec_median: 9.201987852518002
deviation-center-line_median: 1.5132097132490432


other stats
agent_compute-ego0_max: 0.020109523137410483
agent_compute-ego0_mean: 0.020109523137410483
agent_compute-ego0_median: 0.020109523137410483
agent_compute-ego0_min: 0.020109523137410483
agent_compute-npc0_max: 0.05265624735090468
agent_compute-npc0_mean: 0.05265624735090468
agent_compute-npc0_median: 0.05265624735090468
agent_compute-npc0_min: 0.05265624735090468
agent_compute-npc1_max: 0.05374338573879666
agent_compute-npc1_mean: 0.05374338573879666
agent_compute-npc1_median: 0.05374338573879666
agent_compute-npc1_min: 0.05374338573879666
agent_compute-npc2_max: 0.049220169915093315
agent_compute-npc2_mean: 0.049220169915093315
agent_compute-npc2_median: 0.049220169915093315
agent_compute-npc2_min: 0.049220169915093315
agent_compute-npc3_max: 0.05412247392866346
agent_compute-npc3_mean: 0.05412247392866346
agent_compute-npc3_median: 0.05412247392866346
agent_compute-npc3_min: 0.05412247392866346
complete-iteration_max: 1.0400362210803562
complete-iteration_mean: 1.0400362210803562
complete-iteration_median: 1.0400362210803562
complete-iteration_min: 1.0400362210803562
deviation-center-line_max: 1.5132097132490432
deviation-center-line_mean: 1.5132097132490432
deviation-center-line_min: 1.5132097132490432
deviation-heading_max: 3.9934599394275985
deviation-heading_mean: 3.9934599394275985
deviation-heading_median: 3.9934599394275985
deviation-heading_min: 3.9934599394275985
driven_any_max: 9.996122968245825
driven_any_mean: 9.996122968245825
driven_any_median: 9.996122968245825
driven_any_min: 9.996122968245825
driven_lanedir_consec_max: 9.201987852518002
driven_lanedir_consec_mean: 9.201987852518002
driven_lanedir_consec_min: 9.201987852518002
driven_lanedir_max: 9.201987852518002
driven_lanedir_mean: 9.201987852518002
driven_lanedir_median: 9.201987852518002
driven_lanedir_min: 9.201987852518002
get_duckie_state_max: 2.1632512410481773e-06
get_duckie_state_mean: 2.1632512410481773e-06
get_duckie_state_median: 2.1632512410481773e-06
get_duckie_state_min: 2.1632512410481773e-06
get_robot_state_max: 0.02171021514468723
get_robot_state_mean: 0.02171021514468723
get_robot_state_median: 0.02171021514468723
get_robot_state_min: 0.02171021514468723
get_state_dump_max: 0.013366214964124892
get_state_dump_mean: 0.013366214964124892
get_state_dump_median: 0.013366214964124892
get_state_dump_min: 0.013366214964124892
get_ui_image_max: 0.06505189948611789
get_ui_image_mean: 0.06505189948611789
get_ui_image_median: 0.06505189948611789
get_ui_image_min: 0.06505189948611789
in-drivable-lane_max: 1.5500000000000158
in-drivable-lane_mean: 1.5500000000000158
in-drivable-lane_min: 1.5500000000000158
per-episodes details: {"LFV-norm-zigzag-000-ego0": {"driven_any": 9.996122968245825, "get_ui_image": 0.06505189948611789, "step_physics": 0.5845805173450046, "survival_time": 22.450000000000184, "driven_lanedir": 9.201987852518002, "get_state_dump": 0.013366214964124892, "get_robot_state": 0.02171021514468723, "sim_render-ego0": 0.004622848828633627, "sim_render-npc0": 0.004633201493157281, "sim_render-npc1": 0.00464402887556288, "sim_render-npc2": 0.004708585209316677, "sim_render-npc3": 0.0047344843546549475, "get_duckie_state": 2.1632512410481773e-06, "in-drivable-lane": 1.5500000000000158, "deviation-heading": 3.9934599394275985, "agent_compute-ego0": 0.020109523137410483, "agent_compute-npc0": 0.05265624735090468, "agent_compute-npc1": 0.05374338573879666, "agent_compute-npc2": 0.049220169915093315, "agent_compute-npc3": 0.05412247392866346, "complete-iteration": 1.0400362210803562, "set_robot_commands": 0.0029323148727416993, "deviation-center-line": 1.5132097132490432, "driven_lanedir_consec": 9.201987852518002, "sim_compute_sim_state": 0.07408254517449273, "sim_compute_performance-ego0": 0.0027005047268337675, "sim_compute_performance-npc0": 0.002633818520439996, "sim_compute_performance-npc1": 0.0025709777408176, "sim_compute_performance-npc2": 0.002675396071539985, "sim_compute_performance-npc3": 0.0026199960708618163}}
set_robot_commands_max: 0.0029323148727416993
set_robot_commands_mean: 0.0029323148727416993
set_robot_commands_median: 0.0029323148727416993
set_robot_commands_min: 0.0029323148727416993
sim_compute_performance-ego0_max: 0.0027005047268337675
sim_compute_performance-ego0_mean: 0.0027005047268337675
sim_compute_performance-ego0_median: 0.0027005047268337675
sim_compute_performance-ego0_min: 0.0027005047268337675
sim_compute_performance-npc0_max: 0.002633818520439996
sim_compute_performance-npc0_mean: 0.002633818520439996
sim_compute_performance-npc0_median: 0.002633818520439996
sim_compute_performance-npc0_min: 0.002633818520439996
sim_compute_performance-npc1_max: 0.0025709777408176
sim_compute_performance-npc1_mean: 0.0025709777408176
sim_compute_performance-npc1_median: 0.0025709777408176
sim_compute_performance-npc1_min: 0.0025709777408176
sim_compute_performance-npc2_max: 0.002675396071539985
sim_compute_performance-npc2_mean: 0.002675396071539985
sim_compute_performance-npc2_median: 0.002675396071539985
sim_compute_performance-npc2_min: 0.002675396071539985
sim_compute_performance-npc3_max: 0.0026199960708618163
sim_compute_performance-npc3_mean: 0.0026199960708618163
sim_compute_performance-npc3_median: 0.0026199960708618163
sim_compute_performance-npc3_min: 0.0026199960708618163
sim_compute_sim_state_max: 0.07408254517449273
sim_compute_sim_state_mean: 0.07408254517449273
sim_compute_sim_state_median: 0.07408254517449273
sim_compute_sim_state_min: 0.07408254517449273
sim_render-ego0_max: 0.004622848828633627
sim_render-ego0_mean: 0.004622848828633627
sim_render-ego0_median: 0.004622848828633627
sim_render-ego0_min: 0.004622848828633627
sim_render-npc0_max: 0.004633201493157281
sim_render-npc0_mean: 0.004633201493157281
sim_render-npc0_median: 0.004633201493157281
sim_render-npc0_min: 0.004633201493157281
sim_render-npc1_max: 0.00464402887556288
sim_render-npc1_mean: 0.00464402887556288
sim_render-npc1_median: 0.00464402887556288
sim_render-npc1_min: 0.00464402887556288
sim_render-npc2_max: 0.004708585209316677
sim_render-npc2_mean: 0.004708585209316677
sim_render-npc2_median: 0.004708585209316677
sim_render-npc2_min: 0.004708585209316677
sim_render-npc3_max: 0.0047344843546549475
sim_render-npc3_mean: 0.0047344843546549475
sim_render-npc3_median: 0.0047344843546549475
sim_render-npc3_min: 0.0047344843546549475
simulation-passed: 1
step_physics_max: 0.5845805173450046
step_physics_mean: 0.5845805173450046
step_physics_median: 0.5845805173450046
step_physics_min: 0.5845805173450046
survival_time_max: 22.450000000000184
survival_time_mean: 22.450000000000184
survival_time_min: 22.450000000000184
No reset possible
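Note: this LFV episode involves one ego robot and four NPC robots, so the timing metrics are repeated once per agent (agent_compute-ego0, agent_compute-npc0 through agent_compute-npc3, and likewise for sim_render and sim_compute_performance). Below is a small sketch of how those entries can be grouped per agent from the "per-episodes details" JSON above; the dict literal is abbreviated and the grouping code is illustrative only, not part of the evaluator.

from collections import defaultdict

# Timing entries from the LFV per-episodes details above (abbreviated).
episode = {
    "agent_compute-ego0": 0.020109523137410483,
    "agent_compute-npc0": 0.05265624735090468,
    "agent_compute-npc3": 0.05412247392866346,
    "sim_render-ego0": 0.004622848828633627,
    "sim_render-npc3": 0.0047344843546549475,
    "step_physics": 0.5845805173450046,  # not agent-specific, left ungrouped
}

# Split keys of the form "<stage>-<agent>" into a per-agent table.
per_agent = defaultdict(dict)
for key, value in episode.items():
    stage, sep, agent = key.rpartition("-")
    if sep and (agent.startswith("ego") or agent.startswith("npc")):
        per_agent[agent][stage] = value

for agent, stages in sorted(per_agent.items()):
    print(agent, stages)
# Prints one line per agent, e.g. ego0 with its agent_compute and sim_render times.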