
Evaluator 5107

ID: 5107
evaluator: gpu-production-spot-3-02
owner: I don't have one 😀
machine: gpu-prod_6675a37ff70d
process: gpu-production-spot-3-02_6675a37ff70d
version: 6.2.7
first heard:
last heard:
status: inactive
# evaluating:
# success: 14 (job 65895)
# timeout:
# failed: 9 (job 66099)
# error:
# aborted: 2 (job 66322)
# host-error:
arm: 0
x86_64: 1
Mac: 0
gpu available: 1
Number of processors: 64
Processor frequency: 0.0 GHz
Free % of processors: 100%
RAM total: 249.0 GB
RAM free: 238.8 GB
Disk: 969.3 GB
Disk available: 862.2 GB
Docker Hub:
P1: 1
P2:
Cloud simulations: 1
PI Camera: 0
# Duckiebots: 0
Map 3x3 available:
Number of duckies:
gpu cores:
AIDO 2 Map LF public:
AIDO 2 Map LF private:
AIDO 2 Map LFV public:
AIDO 2 Map LFV private:
AIDO 2 Map LFVI public:
AIDO 2 Map LFVI private:
AIDO 3 Map LF public:
AIDO 3 Map LF private:
AIDO 3 Map LFV public:
AIDO 3 Map LFV private:
AIDO 3 Map LFVI public:
AIDO 3 Map LFVI private:
AIDO 5 Map large loop:
ETU track: (for 2021, the map is ETH_small_inter)
IPFS mountpoint /ipfs available:
IPNS mountpoint /ipns available:

Evaluator jobs

Columns: Job ID | submission | user | user label | challenge | step | status | up to date | evaluator | date started | date completed | duration | message
Job 66338: submission 14114 | user: DErek W | user label: template-random | challenge: aido-hello-sim-validation | step: 370 | status: success | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:01:07
survival_time_median: 2.1500000000000004
in-drivable-lane_median: 1.15
driven_lanedir_consec_median: 0.2357331065993169
deviation-center-line_median: 0.05863921966719952


other stats (single episode, so min = mean = median = max):
agent_compute-ego0: 0.011774187738245184
complete-iteration: 0.12963846596804532
deviation-center-line: 0.05863921966719952
deviation-heading: 0.46108604520141144
driven_any: 0.4603380961060899
driven_lanedir: 0.2357331065993169
driven_lanedir_consec: 0.2357331065993169
get_duckie_state: 0.004297153516249223
get_robot_state: 0.0036408413540233264
get_state_dump: 0.0053886229341680355
get_ui_image: 0.026412920518354935
in-drivable-lane: 1.15
set_robot_commands: 0.0022027546709234066
sim_compute_performance-ego0: 0.002023268829692494
sim_compute_sim_state: 0.00495512918992476
sim_render-ego0: 0.0038293979384682393
simulation-passed: 1
step_physics: 0.06503100286830556
survival_time: 2.1500000000000004

per-episode details:
{"hello-norm-small_loop-000-ego0": {"driven_any": 0.4603380961060899, "get_ui_image": 0.026412920518354935, "step_physics": 0.06503100286830556, "survival_time": 2.1500000000000004, "driven_lanedir": 0.2357331065993169, "get_state_dump": 0.0053886229341680355, "get_robot_state": 0.0036408413540233264, "sim_render-ego0": 0.0038293979384682393, "get_duckie_state": 0.004297153516249223, "in-drivable-lane": 1.15, "deviation-heading": 0.46108604520141144, "agent_compute-ego0": 0.011774187738245184, "complete-iteration": 0.12963846596804532, "set_robot_commands": 0.0022027546709234066, "deviation-center-line": 0.05863921966719952, "driven_lanedir_consec": 0.2357331065993169, "sim_compute_sim_state": 0.00495512918992476, "sim_compute_performance-ego0": 0.002023268829692494}}
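For reference, the per-job statistics above are plain per-metric aggregates over the per-episode "details" mapping. A minimal sketch (not the evaluator's actual scoring code) that reproduces the min/mean/median/max rows from a trimmed copy of the details JSON:

    import json
    import statistics

    # Trimmed copy of the per-episode details shown above; the real
    # mapping has one entry per episode with ~18 metrics each.
    details_json = """
    {"hello-norm-small_loop-000-ego0":
     {"survival_time": 2.1500000000000004,
      "in-drivable-lane": 1.15,
      "driven_lanedir_consec": 0.2357331065993169}}
    """

    episodes = json.loads(details_json)

    # Gather each metric's values across episodes.
    per_metric = {}
    for metrics in episodes.values():
        for name, value in metrics.items():
            per_metric.setdefault(name, []).append(value)

    # With a single episode, min, mean, median and max all coincide,
    # which is why the tables above repeat the same number four times.
    for name, values in sorted(per_metric.items()):
        print(f"{name}_min: {min(values)}")
        print(f"{name}_mean: {statistics.mean(values)}")
        print(f"{name}_median: {statistics.median(values)}")
        print(f"{name}_max: {max(values)}")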
Job 66323: submission 13798 | user: Nicholas Kostelnik | user label: template-random | challenge: aido-hello-sim-validation | step: 370 | status: aborted | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:00:22
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 261, in _raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.8/dist-packages/requests/models.py", line 941, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.35/images/create?tag=sha256%3Ab13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691&fromImage=docker.io%2Fnitaigao%2Faido-submissions

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 65, in docker_pull
    pulling = client.api.pull(repository=repository, tag=br.tag, stream=True, decode=True)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/image.py", line 415, in pull
    self._raise_for_status(response)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/local/lib/python3.8/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.ImageNotFound: 404 Client Error: Not Found ("pull access denied for nitaigao/aido-submissions, repository does not exist or may require 'docker login': denied: requested access to the resource is denied")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 38, in docker_pull_retry
    return docker_pull(client, image_name, quiet=quiet)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 84, in docker_pull
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: Cannot pull repo  docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691  tag  None

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 745, in get_cr
    cr = run_single(
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 944, in run_single
    docker_pull_retry(client, image, ntimes=4, wait=5)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 42, in docker_pull_retry
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: After trying 4 I still could not pull docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691
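This job and the identical one below (job 66322) never reached the simulator: the evaluator could not pull the submission image from Docker Hub ("pull access denied ... repository does not exist or may require 'docker login'"). A minimal sketch, using the docker-py client that appears in the traceback, to reproduce the failing pull outside the runner:

    import docker
    from docker.errors import ImageNotFound

    # Image reference taken from the log: a digest-pinned repository that
    # is private or has been deleted, hence the 404 / access-denied error.
    IMAGE = (
        "docker.io/nitaigao/aido-submissions"
        "@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691"
    )

    client = docker.from_env()
    try:
        client.images.pull(IMAGE)  # docker-py splits off the digest itself
    except ImageNotFound as e:
        # The same 404 that the runner wrapped in PullError after 4 attempts.
        print(f"cannot pull {IMAGE}: {e.explanation}")

If the repository is private, logging in with docker login, or pushing the submission image to a public repository, is the usual remedy.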
Job 66322: submission 13798 | user: Nicholas Kostelnik | user label: template-random | challenge: aido-hello-sim-validation | step: 370 | status: aborted | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:00:46
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 261, in _raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.8/dist-packages/requests/models.py", line 941, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.35/images/create?tag=sha256%3Ab13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691&fromImage=docker.io%2Fnitaigao%2Faido-submissions

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 65, in docker_pull
    pulling = client.api.pull(repository=repository, tag=br.tag, stream=True, decode=True)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/image.py", line 415, in pull
    self._raise_for_status(response)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/local/lib/python3.8/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.ImageNotFound: 404 Client Error: Not Found ("pull access denied for nitaigao/aido-submissions, repository does not exist or may require 'docker login': denied: requested access to the resource is denied")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 38, in docker_pull_retry
    return docker_pull(client, image_name, quiet=quiet)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 84, in docker_pull
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: Cannot pull repo  docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691  tag  None

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 745, in get_cr
    cr = run_single(
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 944, in run_single
    docker_pull_retry(client, image, ntimes=4, wait=5)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 42, in docker_pull_retry
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: After trying 4 I still could not pull docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691
Job 66304: submission 13939 | user: YU CHEN | user label: CBC Net v2 test - added mar 31 dataset | challenge: aido-LFP-sim-validation | step: sim-3of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:03:22
survival_time_median: 11.950000000000037
in-drivable-lane_median: 5.7000000000000135
driven_lanedir_consec_median: 1.9331548837222885
deviation-center-line_median: 0.4823324447270163


other stats (single episode, so min = mean = median = max):
agent_compute-ego0: 0.09521088699499766
complete-iteration: 0.36000155806541445
deviation-center-line: 0.4823324447270163
deviation-heading: 2.1065653199501715
driven_any: 3.907494338185262
driven_lanedir: 1.9331548837222885
driven_lanedir_consec: 1.9331548837222885
get_duckie_state: 0.02235761781533559
get_robot_state: 0.004038826624552409
get_state_dump: 0.008656278252601624
get_ui_image: 0.03773196339607239
in-drivable-lane: 5.7000000000000135
set_robot_commands: 0.002531661589940389
sim_compute_performance-ego0: 0.002183126409848531
sim_compute_sim_state: 0.012783519426981608
sim_render-ego0: 0.004098726312319437
simulation-passed: 1
step_physics: 0.17029369274775188
survival_time: 11.950000000000037

per-episode details:
{"LFP-norm-techtrack-000-ego0": {"driven_any": 3.907494338185262, "get_ui_image": 0.03773196339607239, "step_physics": 0.17029369274775188, "survival_time": 11.950000000000037, "driven_lanedir": 1.9331548837222885, "get_state_dump": 0.008656278252601624, "get_robot_state": 0.004038826624552409, "sim_render-ego0": 0.004098726312319437, "get_duckie_state": 0.02235761781533559, "in-drivable-lane": 5.7000000000000135, "deviation-heading": 2.1065653199501715, "agent_compute-ego0": 0.09521088699499766, "complete-iteration": 0.36000155806541445, "set_robot_commands": 0.002531661589940389, "deviation-center-line": 0.4823324447270163, "driven_lanedir_consec": 1.9331548837222885, "sim_compute_sim_state": 0.012783519426981608, "sim_compute_performance-ego0": 0.002183126409848531}}
Job 66248: submission 13697 | user: Samuel Alexander | user label: template-pytorch | challenge: aido-LF-sim-validation | step: sim-1of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:13:16
driven_lanedir_consec_median: 4.127781834752117
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.3264233719479277
in-drivable-lane_median: 29.599999999999103


other stats (single episode, so min = mean = median = max):
agent_compute-ego0: 0.016040843293430605
complete-iteration: 0.31990106874064145
deviation-center-line: 2.3264233719479277
deviation-heading: 20.09676363891244
driven_any: 12.002325324693215
driven_lanedir: 4.131257763269058
driven_lanedir_consec: 4.127781834752117
get_duckie_state: 1.300681540610689e-06
get_robot_state: 0.004189078952748015
get_state_dump: 0.005134905307715779
get_ui_image: 0.0364597701311707
in-drivable-lane: 29.599999999999103
set_robot_commands: 0.0025434577395576524
sim_compute_performance-ego0: 0.0022208436541910673
sim_compute_sim_state: 0.010059627267740647
sim_render-ego0: 0.004129716498369381
simulation-passed: 1
step_physics: 0.23903354280298697
survival_time: 59.99999999999873

per-episode details:
{"LF-norm-techtrack-000-ego0": {"driven_any": 12.002325324693215, "get_ui_image": 0.0364597701311707, "step_physics": 0.23903354280298697, "survival_time": 59.99999999999873, "driven_lanedir": 4.131257763269058, "get_state_dump": 0.005134905307715779, "get_robot_state": 0.004189078952748015, "sim_render-ego0": 0.004129716498369381, "get_duckie_state": 1.300681540610689e-06, "in-drivable-lane": 29.599999999999103, "deviation-heading": 20.09676363891244, "agent_compute-ego0": 0.016040843293430605, "complete-iteration": 0.31990106874064145, "set_robot_commands": 0.0025434577395576524, "deviation-center-line": 2.3264233719479277, "driven_lanedir_consec": 4.127781834752117, "sim_compute_sim_state": 0.010059627267740647, "sim_compute_performance-ego0": 0.0022208436541910673}}
Job 66222: submission 13909 | user: YU CHEN | user label: CBC Net v2 - test | challenge: aido-LF-sim-validation | step: sim-0of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:10:42
driven_lanedir_consec_median: 17.207530508634708
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.323691825267697
in-drivable-lane_median: 19.29999999999955


other stats (single episode, so min = mean = median = max):
agent_compute-ego0: 0.0919982416246654
complete-iteration: 0.2914741517701415
deviation-center-line: 3.323691825267697
deviation-heading: 10.148357250775245
driven_any: 27.16715292724321
driven_lanedir: 17.207530508634708
driven_lanedir_consec: 17.207530508634708
get_duckie_state: 1.2480746101677964e-06
get_robot_state: 0.004081615897439898
get_state_dump: 0.005119840469487403
get_ui_image: 0.0317077253581483
in-drivable-lane: 19.29999999999955
set_robot_commands: 0.002604154821835787
sim_compute_performance-ego0: 0.0022162604987075387
sim_compute_sim_state: 0.010028640793126192
sim_render-ego0: 0.004215800494178944
simulation-passed: 1
step_physics: 0.13940853897876088
survival_time: 59.99999999999873

per-episode details:
{"LF-norm-loop-000-ego0": {"driven_any": 27.16715292724321, "get_ui_image": 0.0317077253581483, "step_physics": 0.13940853897876088, "survival_time": 59.99999999999873, "driven_lanedir": 17.207530508634708, "get_state_dump": 0.005119840469487403, "get_robot_state": 0.004081615897439898, "sim_render-ego0": 0.004215800494178944, "get_duckie_state": 1.2480746101677964e-06, "in-drivable-lane": 19.29999999999955, "deviation-heading": 10.148357250775245, "agent_compute-ego0": 0.0919982416246654, "complete-iteration": 0.2914741517701415, "set_robot_commands": 0.002604154821835787, "deviation-center-line": 3.323691825267697, "driven_lanedir_consec": 17.207530508634708, "sim_compute_sim_state": 0.010028640793126192, "sim_compute_performance-ego0": 0.0022162604987075387}}
Job 66208: submission 13965 | user: YU CHEN | user label: CBC Net v2 test - APR 3 BC TFdata + mar 28 anomaly | challenge: aido-LFP-sim-validation | step: sim-0of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:03:18
survival_time_median: 11.55000000000003
in-drivable-lane_median: 7.950000000000035
driven_lanedir_consec_median: 1.1032126695354614
deviation-center-line_median: 0.3931564044607345


other stats (single episode, so min = mean = median = max):
agent_compute-ego0: 0.09877325645808516
complete-iteration: 0.358150452375412
deviation-center-line: 0.3931564044607345
deviation-heading: 1.1866220717176323
driven_any: 4.129407272044396
driven_lanedir: 1.1032126695354614
driven_lanedir_consec: 1.1032126695354614
get_duckie_state: 0.02298124699757017
get_robot_state: 0.004114238352611147
get_state_dump: 0.0086051554515444
get_ui_image: 0.04286635024794217
in-drivable-lane: 7.950000000000035
set_robot_commands: 0.0026648959209179058
sim_compute_performance-ego0: 0.0022313194028262436
sim_compute_sim_state: 0.012495326584783096
sim_render-ego0: 0.004241197273649019
simulation-passed: 1
step_physics: 0.1590693644408522
survival_time: 11.55000000000003

per-episode details:
{"LFP-norm-zigzag-000-ego0": {"driven_any": 4.129407272044396, "get_ui_image": 0.04286635024794217, "step_physics": 0.1590693644408522, "survival_time": 11.55000000000003, "driven_lanedir": 1.1032126695354614, "get_state_dump": 0.0086051554515444, "get_robot_state": 0.004114238352611147, "sim_render-ego0": 0.004241197273649019, "get_duckie_state": 0.02298124699757017, "in-drivable-lane": 7.950000000000035, "deviation-heading": 1.1866220717176323, "agent_compute-ego0": 0.09877325645808516, "complete-iteration": 0.358150452375412, "set_robot_commands": 0.0026648959209179058, "deviation-center-line": 0.3931564044607345, "driven_lanedir_consec": 1.1032126695354614, "sim_compute_sim_state": 0.012495326584783096, "sim_compute_performance-ego0": 0.0022313194028262436}}
Job 66196: submission 13965 | user: YU CHEN | user label: CBC Net v2 test - APR 3 BC TFdata + mar 28 anomaly | challenge: aido-LFP-sim-validation | step: sim-0of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:01:19
survival_time_median: 2.3499999999999996
in-drivable-lane_median: 0.05000000000000005
driven_lanedir_consec_median: 0.47411781686864174
deviation-center-line_median: 0.21237908660753357


other stats (single episode, so min = mean = median = max):
agent_compute-ego0: 0.08912438650925954
complete-iteration: 0.3027915010849635
deviation-center-line: 0.21237908660753357
deviation-heading: 0.9111143197634533
driven_any: 0.5252436381308034
driven_lanedir: 0.47411781686864174
driven_lanedir_consec: 0.47411781686864174
get_duckie_state: 0.020732422669728596
get_robot_state: 0.003679409623146057
get_state_dump: 0.00810229778289795
get_ui_image: 0.04047229389349619
in-drivable-lane: 0.05000000000000005
set_robot_commands: 0.002379998564720154
sim_compute_performance-ego0: 0.0020694335301717124
sim_compute_sim_state: 0.008940577507019043
sim_render-ego0: 0.003929495811462402
simulation-passed: 1
step_physics: 0.12325105567773184
survival_time: 2.3499999999999996

per-episode details:
{"LFP-norm-zigzag-000-ego0": {"driven_any": 0.5252436381308034, "get_ui_image": 0.04047229389349619, "step_physics": 0.12325105567773184, "survival_time": 2.3499999999999996, "driven_lanedir": 0.47411781686864174, "get_state_dump": 0.00810229778289795, "get_robot_state": 0.003679409623146057, "sim_render-ego0": 0.003929495811462402, "get_duckie_state": 0.020732422669728596, "in-drivable-lane": 0.05000000000000005, "deviation-heading": 0.9111143197634533, "agent_compute-ego0": 0.08912438650925954, "complete-iteration": 0.3027915010849635, "set_robot_commands": 0.002379998564720154, "deviation-center-line": 0.21237908660753357, "driven_lanedir_consec": 0.47411781686864174, "sim_compute_sim_state": 0.008940577507019043, "sim_compute_performance-ego0": 0.0020694335301717124}}
Job 66193: submission 13998 | user: Frank (Chude) Qian 🇨🇦 | user label: baseline-behavior-cloning New Dataset | challenge: aido-LFP-sim-validation | step: sim-1of4 | status: failed | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:00:42
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 589, in run_episode
    r: MsgReceived = await loop.run_in_executor(executor, f)
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 135, in write_topic_and_expect
    ob: MsgReceived = self.read_one(expect_topic=expect, timeout=timeout)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 218, in read_one
    msgs = read_reply(self.fpout, timeout=timeout, waiting_for=waiting_for, nickname=self.nickname,)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 304, in read_reply
    others = read_until_over(fpout, timeout=timeout, nickname=nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 331, in read_until_over
    raise RemoteNodeAborted(m)
zuper_nodes.structures.RemoteNodeAborted: External node "ego0" aborted:

error in ego0 |Exception while handling a message on topic "get_commands".
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 355, in loop
              ||     handle_message_node(parsed, receiver0, context0)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 531, in handle_message_node
              ||     call_if_fun_exists(agent, expect_fn, data=ob, context=context, timing=timing)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 84, in on_received_get_commands
              ||     linear, angular = self.compute_action(self.to_predictor)
              ||   File "solution.py", line 79, in compute_action
              ||     (linear, angular) = self.model.predict(observation)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/training.py", line 88, in _method_wrapper
              ||     return method(self, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/training.py", line 1268, in predict
              ||     tmp_batch_outputs = predict_function(iterator)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
              ||     result = self._call(*args, **kwds)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 650, in _call
              ||     return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds)  # pylint: disable=protected-access
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1661, in _filtered_call
              ||     return self._call_flat(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1745, in _call_flat
              ||     return self._build_call_outputs(self._inference_function.call(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 593, in call
              ||     outputs = execute.execute(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
              ||     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
              || tensorflow.python.framework.errors_impl.UnknownError:  Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node FrankNet/conv2d_5/Conv2D (defined at solution.py:79) ]] [Op:__inference_predict_function_790]
              ||
              || Errors may have originated from an input operation.
              || Input Source operations connected to node FrankNet/conv2d_5/Conv2D:
              ||  FrankNet/lambda_1/truediv (defined at /submission/frankModel.py:42)
              ||
              || Function call stack:
              || predict_function
              ||
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 312, in main
    length_s = await run_episode(
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 593, in run_episode
    raise dc.InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Trouble with communication to the agent.
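This job and the identical failures below (jobs 66182, 66160 and 66150) crash inside the agent itself: TensorFlow reports "Failed to get convolution algorithm ... cuDNN failed to initialize", which usually means the GPU's memory was already exhausted when cuDNN started up. A common TF2 workaround, shown here as a generic recipe rather than a verified fix for this submission, is to enable on-demand GPU memory growth before the model is loaded:

    import tensorflow as tf

    # Ask TensorFlow to grow GPU memory on demand instead of reserving it
    # all at startup; this must run before any GPU op executes.
    for gpu in tf.config.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)

    # The agent's predict call from the traceback would then run as usual:
    # (linear, angular) = self.model.predict(observation)  # solution.py:79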
Job 66182: submission 13998 | user: Frank (Chude) Qian 🇨🇦 | user label: baseline-behavior-cloning New Dataset | challenge: aido-LFP-sim-validation | step: sim-1of4 | status: failed | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:01:06
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 589, in run_episode
    r: MsgReceived = await loop.run_in_executor(executor, f)
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 135, in write_topic_and_expect
    ob: MsgReceived = self.read_one(expect_topic=expect, timeout=timeout)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 218, in read_one
    msgs = read_reply(self.fpout, timeout=timeout, waiting_for=waiting_for, nickname=self.nickname,)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 304, in read_reply
    others = read_until_over(fpout, timeout=timeout, nickname=nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 331, in read_until_over
    raise RemoteNodeAborted(m)
zuper_nodes.structures.RemoteNodeAborted: External node "ego0" aborted:

error in ego0 |Exception while handling a message on topic "get_commands".
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 355, in loop
              ||     handle_message_node(parsed, receiver0, context0)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 531, in handle_message_node
              ||     call_if_fun_exists(agent, expect_fn, data=ob, context=context, timing=timing)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 84, in on_received_get_commands
              ||     linear, angular = self.compute_action(self.to_predictor)
              ||   File "solution.py", line 79, in compute_action
              ||     (linear, angular) = self.model.predict(observation)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/training.py", line 88, in _method_wrapper
              ||     return method(self, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/training.py", line 1268, in predict
              ||     tmp_batch_outputs = predict_function(iterator)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
              ||     result = self._call(*args, **kwds)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 650, in _call
              ||     return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds)  # pylint: disable=protected-access
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1661, in _filtered_call
              ||     return self._call_flat(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1745, in _call_flat
              ||     return self._build_call_outputs(self._inference_function.call(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 593, in call
              ||     outputs = execute.execute(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
              ||     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
              || tensorflow.python.framework.errors_impl.UnknownError:  Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node FrankNet/conv2d_5/Conv2D (defined at solution.py:79) ]] [Op:__inference_predict_function_790]
              ||
              || Errors may have originated from an input operation.
              || Input Source operations connected to node FrankNet/conv2d_5/Conv2D:
              ||  FrankNet/lambda_1/truediv (defined at /submission/frankModel.py:42)
              ||
              || Function call stack:
              || predict_function
              ||
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 312, in main
    length_s = await run_episode(
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 593, in run_episode
    raise dc.InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Trouble with communication to the agent.
Job 66168: submission 13992 | user: Frank (Chude) Qian 🇨🇦 | user label: CBC Net - MixTraining - Expert LF Human LFP | challenge: aido-LFP-sim-validation | step: sim-0of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:02:39
survival_time_median: 8.349999999999984
in-drivable-lane_median: 3.7999999999999954
driven_lanedir_consec_median: 1.7616282471558802
deviation-center-line_median: 0.46593932932540777


other stats (single episode, so min = mean = median = max):
agent_compute-ego0: 0.05721008351870945
complete-iteration: 0.33440216808092027
deviation-center-line: 0.46593932932540777
deviation-heading: 1.782360917011877
driven_any: 3.507273772056232
driven_lanedir: 1.7616282471558802
driven_lanedir_consec: 1.7616282471558802
get_duckie_state: 0.0223121444384257
get_robot_state: 0.003943436202548799
get_state_dump: 0.008714399167469569
get_ui_image: 0.041869458698091055
in-drivable-lane: 3.7999999999999954
set_robot_commands: 0.0025551787444523404
sim_compute_performance-ego0: 0.0022036688668387277
sim_compute_sim_state: 0.01172714006333124
sim_render-ego0: 0.004182330199650356
simulation-passed: 1
step_physics: 0.17956942319869995
survival_time: 8.349999999999984

per-episode details:
{"LFP-norm-zigzag-000-ego0": {"driven_any": 3.507273772056232, "get_ui_image": 0.041869458698091055, "step_physics": 0.17956942319869995, "survival_time": 8.349999999999984, "driven_lanedir": 1.7616282471558802, "get_state_dump": 0.008714399167469569, "get_robot_state": 0.003943436202548799, "sim_render-ego0": 0.004182330199650356, "get_duckie_state": 0.0223121444384257, "in-drivable-lane": 3.7999999999999954, "deviation-heading": 1.782360917011877, "agent_compute-ego0": 0.05721008351870945, "complete-iteration": 0.33440216808092027, "set_robot_commands": 0.0025551787444523404, "deviation-center-line": 0.46593932932540777, "driven_lanedir_consec": 1.7616282471558802, "sim_compute_sim_state": 0.01172714006333124, "sim_compute_performance-ego0": 0.0022036688668387277}}
Job 66160: submission 13996 | user: Frank (Chude) Qian 🇨🇦 | user label: baseline-behavior-cloning | challenge: aido-LFP-sim-validation | step: sim-0of4 | status: failed | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:00:50
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 589, in run_episode
    r: MsgReceived = await loop.run_in_executor(executor, f)
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 135, in write_topic_and_expect
    ob: MsgReceived = self.read_one(expect_topic=expect, timeout=timeout)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 218, in read_one
    msgs = read_reply(self.fpout, timeout=timeout, waiting_for=waiting_for, nickname=self.nickname,)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 304, in read_reply
    others = read_until_over(fpout, timeout=timeout, nickname=nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 331, in read_until_over
    raise RemoteNodeAborted(m)
zuper_nodes.structures.RemoteNodeAborted: External node "ego0" aborted:

error in ego0 |Exception while handling a message on topic "get_commands".
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 355, in loop
              ||     handle_message_node(parsed, receiver0, context0)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 531, in handle_message_node
              ||     call_if_fun_exists(agent, expect_fn, data=ob, context=context, timing=timing)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 84, in on_received_get_commands
              ||     linear, angular = self.compute_action(self.to_predictor)
              ||   File "solution.py", line 79, in compute_action
              ||     (linear, angular) = self.model.predict(observation)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/training.py", line 88, in _method_wrapper
              ||     return method(self, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/training.py", line 1268, in predict
              ||     tmp_batch_outputs = predict_function(iterator)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
              ||     result = self._call(*args, **kwds)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 650, in _call
              ||     return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds)  # pylint: disable=protected-access
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1661, in _filtered_call
              ||     return self._call_flat(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1745, in _call_flat
              ||     return self._build_call_outputs(self._inference_function.call(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 593, in call
              ||     outputs = execute.execute(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
              ||     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
              || tensorflow.python.framework.errors_impl.UnknownError:  Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node FrankNet/conv2d_5/Conv2D (defined at solution.py:79) ]] [Op:__inference_predict_function_790]
              ||
              || Errors may have originated from an input operation.
              || Input Source operations connected to node FrankNet/conv2d_5/Conv2D:
              ||  FrankNet/lambda_1/truediv (defined at /submission/frankModel.py:42)
              ||
              || Function call stack:
              || predict_function
              ||
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 312, in main
    length_s = await run_episode(
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 593, in run_episode
    raise dc.InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Trouble with communication to the agent.
Job 66150: submission 13998 | user: Frank (Chude) Qian 🇨🇦 | user label: baseline-behavior-cloning New Dataset | challenge: aido-LFP-sim-validation | step: sim-0of4 | status: failed | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:00:50
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 589, in run_episode
    r: MsgReceived = await loop.run_in_executor(executor, f)
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 135, in write_topic_and_expect
    ob: MsgReceived = self.read_one(expect_topic=expect, timeout=timeout)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 218, in read_one
    msgs = read_reply(self.fpout, timeout=timeout, waiting_for=waiting_for, nickname=self.nickname,)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 304, in read_reply
    others = read_until_over(fpout, timeout=timeout, nickname=nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 331, in read_until_over
    raise RemoteNodeAborted(m)
zuper_nodes.structures.RemoteNodeAborted: External node "ego0" aborted:

error in ego0 |Exception while handling a message on topic "get_commands".
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 355, in loop
              ||     handle_message_node(parsed, receiver0, context0)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 531, in handle_message_node
              ||     call_if_fun_exists(agent, expect_fn, data=ob, context=context, timing=timing)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 84, in on_received_get_commands
              ||     linear, angular = self.compute_action(self.to_predictor)
              ||   File "solution.py", line 79, in compute_action
              ||     (linear, angular) = self.model.predict(observation)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/training.py", line 88, in _method_wrapper
              ||     return method(self, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/training.py", line 1268, in predict
              ||     tmp_batch_outputs = predict_function(iterator)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
              ||     result = self._call(*args, **kwds)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 650, in _call
              ||     return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds)  # pylint: disable=protected-access
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1661, in _filtered_call
              ||     return self._call_flat(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1745, in _call_flat
              ||     return self._build_call_outputs(self._inference_function.call(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 593, in call
              ||     outputs = execute.execute(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
              ||     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
              || tensorflow.python.framework.errors_impl.UnknownError:  Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node FrankNet/conv2d_5/Conv2D (defined at solution.py:79) ]] [Op:__inference_predict_function_790]
              ||
              || Errors may have originated from an input operation.
              || Input Source operations connected to node FrankNet/conv2d_5/Conv2D:
              ||  FrankNet/lambda_1/truediv (defined at /submission/frankModel.py:42)
              ||
              || Function call stack:
              || predict_function
              ||
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 312, in main
    length_s = await run_episode(
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 593, in run_episode
    raise dc.InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Trouble with communication to the agent.
No reset possible
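Both this failure and the ones below reduce to the same underlying error: `tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize`. This is TensorFlow's generic symptom when cuDNN cannot set up a convolution, most often because the process cannot allocate enough GPU memory at that point, or because the CUDA/cuDNN build inside the submission image does not match the evaluator's driver. For the out-of-memory case, a minimal sketch of a common mitigation, assuming the TF 2.x eager API this traceback goes through (it must run before any model is built, e.g. at the top of the agent's init):

    import tensorflow as tf

    # Allocate GPU memory on demand instead of reserving (nearly) all of it
    # up front; this often avoids "Failed to get convolution algorithm" when
    # cuDNN cannot initialize for lack of free device memory.
    for gpu in tf.config.experimental.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)

If instead the image's CUDA/cuDNN stack does not match the host driver, the same message appears regardless of memory settings, and the submission image itself has to be rebuilt.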
Job 66144 | submission 13513 | András Kalapos 🇭🇺 | real-v1.0-3091-310 | aido-LFV-sim-validation | sim-1of4 | failed | up to date: no | gpu-production-spot-3-02 | 0:01:23
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
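This job fails during init rather than at the first "get_commands", but the root cause is the same cuDNN initialization error, surfacing here through the TF1-style `tf.Session` that Ray RLlib's `PPOTrainer` drives. In the session API the equivalent memory-growth knob is `gpu_options.allow_growth`; a minimal sketch, assuming the `tensorflow.compat.v1` shim (RLlib also exposes session options through its `tf_session_args` trainer config key, so in a submission like this one the setting would more naturally go into the `rllib_config` dictionary; treat that routing as an assumption):

    import tensorflow.compat.v1 as tf

    # TF1-style equivalent of memory growth: the session claims GPU memory
    # incrementally instead of reserving the whole device at startup.
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    sess = tf.Session(config=config)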
Job 66132 | submission 14014 | YU CHEN | CBC Net v2 test - APR 6 anomaly + mar 28 bc | aido-LFP-sim-validation | sim-1of4 | success | up to date: no | gpu-production-spot-3-02 | 0:02:00
survival_time_median: 6.549999999999985
in-drivable-lane_median: 3.1499999999999897
driven_lanedir_consec_median: 1.2285669456213737
deviation-center-line_median: 0.2367238741049216


other stats
agent_compute-ego0_max: 0.09649059627995346
agent_compute-ego0_mean: 0.09649059627995346
agent_compute-ego0_median: 0.09649059627995346
agent_compute-ego0_min: 0.09649059627995346
complete-iteration_max: 0.27960949955564557
complete-iteration_mean: 0.27960949955564557
complete-iteration_median: 0.27960949955564557
complete-iteration_min: 0.27960949955564557
deviation-center-line_max: 0.2367238741049216
deviation-center-line_mean: 0.2367238741049216
deviation-center-line_min: 0.2367238741049216
deviation-heading_max: 1.4441774109308514
deviation-heading_mean: 1.4441774109308514
deviation-heading_median: 1.4441774109308514
deviation-heading_min: 1.4441774109308514
driven_any_max: 2.0661508525224384
driven_any_mean: 2.0661508525224384
driven_any_median: 2.0661508525224384
driven_any_min: 2.0661508525224384
driven_lanedir_consec_max: 1.2285669456213737
driven_lanedir_consec_mean: 1.2285669456213737
driven_lanedir_consec_min: 1.2285669456213737
driven_lanedir_max: 1.2285669456213737
driven_lanedir_mean: 1.2285669456213737
driven_lanedir_median: 1.2285669456213737
driven_lanedir_min: 1.2285669456213737
get_duckie_state_max: 0.0048177242279052734
get_duckie_state_mean: 0.0048177242279052734
get_duckie_state_median: 0.0048177242279052734
get_duckie_state_min: 0.0048177242279052734
get_robot_state_max: 0.004234398856307521
get_robot_state_mean: 0.004234398856307521
get_robot_state_median: 0.004234398856307521
get_robot_state_min: 0.004234398856307521
get_state_dump_max: 0.006221037922483502
get_state_dump_mean: 0.006221037922483502
get_state_dump_median: 0.006221037922483502
get_state_dump_min: 0.006221037922483502
get_ui_image_max: 0.02918094035350915
get_ui_image_mean: 0.02918094035350915
get_ui_image_median: 0.02918094035350915
get_ui_image_min: 0.02918094035350915
in-drivable-lane_max: 3.1499999999999897
in-drivable-lane_mean: 3.1499999999999897
in-drivable-lane_min: 3.1499999999999897
per-episodes
details: {"LFP-norm-small_loop-000-ego0": {"driven_any": 2.0661508525224384, "get_ui_image": 0.02918094035350915, "step_physics": 0.12255283377387306, "survival_time": 6.549999999999985, "driven_lanedir": 1.2285669456213737, "get_state_dump": 0.006221037922483502, "get_robot_state": 0.004234398856307521, "sim_render-ego0": 0.00434783191391916, "get_duckie_state": 0.0048177242279052734, "in-drivable-lane": 3.1499999999999897, "deviation-heading": 1.4441774109308514, "agent_compute-ego0": 0.09649059627995346, "complete-iteration": 0.27960949955564557, "set_robot_commands": 0.0027660835872996936, "deviation-center-line": 0.2367238741049216, "driven_lanedir_consec": 1.2285669456213737, "sim_compute_sim_state": 0.006594769882433342, "sim_compute_performance-ego0": 0.0022900447700962877}}
set_robot_commands_max: 0.0027660835872996936
set_robot_commands_mean: 0.0027660835872996936
set_robot_commands_median: 0.0027660835872996936
set_robot_commands_min: 0.0027660835872996936
sim_compute_performance-ego0_max: 0.0022900447700962877
sim_compute_performance-ego0_mean: 0.0022900447700962877
sim_compute_performance-ego0_median: 0.0022900447700962877
sim_compute_performance-ego0_min: 0.0022900447700962877
sim_compute_sim_state_max: 0.006594769882433342
sim_compute_sim_state_mean: 0.006594769882433342
sim_compute_sim_state_median: 0.006594769882433342
sim_compute_sim_state_min: 0.006594769882433342
sim_render-ego0_max: 0.00434783191391916
sim_render-ego0_mean: 0.00434783191391916
sim_render-ego0_median: 0.00434783191391916
sim_render-ego0_min: 0.00434783191391916
simulation-passed: 1
step_physics_max: 0.12255283377387306
step_physics_mean: 0.12255283377387306
step_physics_median: 0.12255283377387306
step_physics_min: 0.12255283377387306
survival_time_max: 6.549999999999985
survival_time_mean: 6.549999999999985
survival_time_min: 6.549999999999985
No reset possible
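In the score tables above, min, mean, median, and max coincide for every metric because this step ran a single episode: each aggregate is computed per metric over the episodes listed under per-episodes. A sketch of that aggregation, assuming the per-episodes object has been saved to a file (the name details.json is hypothetical):

    import json
    import statistics

    # Hypothetical dump of the per-episodes object shown above.
    with open("details.json") as f:
        episodes = json.load(f)

    # Gather each metric's values across episodes, then aggregate.
    metrics = {}
    for episode in episodes.values():
        for name, value in episode.items():
            metrics.setdefault(name, []).append(value)

    for name, values in sorted(metrics.items()):
        print(f"{name}_min: {min(values)}")
        print(f"{name}_median: {statistics.median(values)}")
        print(f"{name}_mean: {statistics.mean(values)}")
        print(f"{name}_max: {max(values)}")

With one episode each list holds a single value, so all four aggregates print the same number.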
Job 66122 | submission 13570 | Márton Tim 🇭🇺 | 3626 | aido-LFP-sim-testing | sim-1of4 | success | up to date: no | gpu-production-spot-3-02 | 0:02:17
survival_time_median: 3.049999999999997
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 0.9600281448107432
deviation-center-line_median: 0.15274297171280152


other stats
agent_compute-ego0_max: 0.049886930373407176
agent_compute-ego0_mean: 0.049886930373407176
agent_compute-ego0_median: 0.049886930373407176
agent_compute-ego0_min: 0.049886930373407176
complete-iteration_max: 0.23391862069406816
complete-iteration_mean: 0.23391862069406816
complete-iteration_median: 0.23391862069406816
complete-iteration_min: 0.23391862069406816
deviation-center-line_max: 0.15274297171280152
deviation-center-line_mean: 0.15274297171280152
deviation-center-line_min: 0.15274297171280152
deviation-heading_max: 0.630988404164939
deviation-heading_mean: 0.630988404164939
deviation-heading_median: 0.630988404164939
deviation-heading_min: 0.630988404164939
driven_any_max: 0.9765303383212154
driven_any_mean: 0.9765303383212154
driven_any_median: 0.9765303383212154
driven_any_min: 0.9765303383212154
driven_lanedir_consec_max: 0.9600281448107432
driven_lanedir_consec_mean: 0.9600281448107432
driven_lanedir_consec_min: 0.9600281448107432
driven_lanedir_max: 0.9600281448107432
driven_lanedir_mean: 0.9600281448107432
driven_lanedir_median: 0.9600281448107432
driven_lanedir_min: 0.9600281448107432
get_duckie_state_max: 0.004924493451272288
get_duckie_state_mean: 0.004924493451272288
get_duckie_state_median: 0.004924493451272288
get_duckie_state_min: 0.004924493451272288
get_robot_state_max: 0.004296706568810248
get_robot_state_mean: 0.004296706568810248
get_robot_state_median: 0.004296706568810248
get_robot_state_min: 0.004296706568810248
get_state_dump_max: 0.006616157870138845
get_state_dump_mean: 0.006616157870138845
get_state_dump_median: 0.006616157870138845
get_state_dump_min: 0.006616157870138845
get_ui_image_max: 0.02965034592536188
get_ui_image_mean: 0.02965034592536188
get_ui_image_median: 0.02965034592536188
get_ui_image_min: 0.02965034592536188
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes
details: {"LFP-norm-small_loop-000-ego0": {"driven_any": 0.9765303383212154, "get_ui_image": 0.02965034592536188, "step_physics": 0.12219445936141476, "survival_time": 3.049999999999997, "driven_lanedir": 0.9600281448107432, "get_state_dump": 0.006616157870138845, "get_robot_state": 0.004296706568810248, "sim_render-ego0": 0.00449184448488297, "get_duckie_state": 0.004924493451272288, "in-drivable-lane": 0.0, "deviation-heading": 0.630988404164939, "agent_compute-ego0": 0.049886930373407176, "complete-iteration": 0.23391862069406816, "set_robot_commands": 0.002712134392030778, "deviation-center-line": 0.15274297171280152, "driven_lanedir_consec": 0.9600281448107432, "sim_compute_sim_state": 0.006706814612111737, "sim_compute_performance-ego0": 0.0023280459065591137}}
set_robot_commands_max: 0.002712134392030778
set_robot_commands_mean: 0.002712134392030778
set_robot_commands_median: 0.002712134392030778
set_robot_commands_min: 0.002712134392030778
sim_compute_performance-ego0_max: 0.0023280459065591137
sim_compute_performance-ego0_mean: 0.0023280459065591137
sim_compute_performance-ego0_median: 0.0023280459065591137
sim_compute_performance-ego0_min: 0.0023280459065591137
sim_compute_sim_state_max: 0.006706814612111737
sim_compute_sim_state_mean: 0.006706814612111737
sim_compute_sim_state_median: 0.006706814612111737
sim_compute_sim_state_min: 0.006706814612111737
sim_render-ego0_max: 0.00449184448488297
sim_render-ego0_mean: 0.00449184448488297
sim_render-ego0_median: 0.00449184448488297
sim_render-ego0_min: 0.00449184448488297
simulation-passed: 1
step_physics_max: 0.12219445936141476
step_physics_mean: 0.12219445936141476
step_physics_median: 0.12219445936141476
step_physics_min: 0.12219445936141476
survival_time_max: 3.049999999999997
survival_time_mean: 3.049999999999997
survival_time_min: 3.049999999999997
No reset possible
Job 66119 | submission 13504 | András Kalapos 🇭🇺 | real-v1.0-3091-310 | aido-LF-sim-testing | sim-1of4 | failed | up to date: no | gpu-production-spot-3-02 | 0:00:40
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
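
The failures recorded on this evaluator all abort the same way: while the submission's init() builds a PPOTrainer, TensorFlow raises "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize". That error typically means the process could not claim GPU memory for cuDNN when the first convolution op ran. Below is a minimal mitigation sketch, assuming TensorFlow 2.x as the tracebacks suggest; placing it at the top of the submission's solution.py is an assumption, not something shown in these logs.

    # Sketch only: ask TensorFlow to allocate GPU memory on demand instead of
    # reserving it all up front, a common cause of the cuDNN initialization
    # failure seen above. Must run before any op touches the GPU.
    import os
    os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"  # documented TF switch

    import tensorflow as tf
    for gpu in tf.config.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)

RLlib also accepts per-session options through the trainer config's tf_session_args (e.g. gpu_options.allow_growth); whether the submission's config["rllib_config"] sets them is not visible in these logs.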
Job 66115 · submission 13504 · user AndrΓ‘s Kalapos 🇭🇺 · user label real-v1.0-3091-310 · challenge aido-LF-sim-testing · step sim-1of4 · status failed · up to date: no · evaluator gpu-production-spot-3-02 · duration 0:00:40
message: InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
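
Because the identical cuDNN error recurs across submissions 13504 and 13513, a quick self-test at container start can distinguish a broken GPU/cuDNN runtime from a bug in the agent itself. This is a hedged sketch under the same TensorFlow 2.x assumption; no such check appears in the logged solution.py.

    # Run one tiny convolution: if cuDNN cannot initialize, this fails with the
    # same UnknownError as in the job logs, before any RLlib machinery is built.
    import numpy as np
    import tensorflow as tf

    print("GPUs visible:", tf.config.list_physical_devices("GPU"))
    x = tf.constant(np.random.rand(1, 32, 32, 3).astype("float32"))
    k = tf.constant(np.random.rand(3, 3, 3, 8).astype("float32"))
    y = tf.nn.conv2d(x, k, strides=1, padding="SAME")
    print("conv2d ok, output shape:", y.shape)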
Job 66106 · submission 13513 · user AndrΓ‘s Kalapos 🇭🇺 · user label real-v1.0-3091-310 · challenge aido-LFV-sim-validation · step sim-0of4 · status failed · up to date: no · evaluator gpu-production-spot-3-02 · duration 0:01:16
message: InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
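Editorial note (not part of the evaluator output): the job above failed before its first simulation step. PPOTrainer.__init__ builds the policy graph and runs a warm-up forward pass through the value network, and cuDNN could not initialize at that point. "Failed to get convolution algorithm" is, in most reports, a GPU memory problem (the process cannot obtain the workspace cuDNN needs) rather than a model bug. A minimal sketch of the commonly suggested mitigation, assuming TensorFlow 2.x and that it executes before the trainer is constructed:

    # Sketch: let TensorFlow allocate GPU memory on demand instead of
    # reserving the whole device up front, so cuDNN can still get its
    # workspace. Must run before the first op touches the GPU (here:
    # before PPOTrainer is built in the submission's model.py).
    import tensorflow as tf

    for gpu in tf.config.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)
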
Job ID: 66099 | submission: 13513 | user: AndrΓ‘s Kalapos πŸ‡­πŸ‡Ί | user label: real-v1.0-3091-310 | challenge: aido-LFV-sim-validation | step: sim-0of4 | status: failed | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:01:14
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
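This job hit the identical cuDNN initialization failure during PPOTrainer construction. When the model code cannot easily be changed before TensorFlow touches the GPU, the same allow-growth behaviour can be requested from the environment; a hedged sketch (TF_FORCE_GPU_ALLOW_GROWTH is a documented TensorFlow switch, its placement at the top of the submission's solution.py is an assumption):

    # Sketch: set the allow-growth flag before TensorFlow is imported
    # anywhere (ray.rllib imports it transitively), since the flag is
    # read when the GPU context is first created.
    import os
    os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
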
Job ID: 66053 | submission: 13572 | user: MΓ‘rton Tim πŸ‡­πŸ‡Ί | user label: 3626 | challenge: aido-LFV-sim-validation | step: sim-1of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:12:39
survival_time_median: 22.400000000000183
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 9.695785608023472
deviation-center-line_median: 1.5747170826797563


other stats
agent_compute-ego0_max: 0.0494210449252734
agent_compute-ego0_mean: 0.0494210449252734
agent_compute-ego0_median: 0.0494210449252734
agent_compute-ego0_min: 0.0494210449252734
agent_compute-npc0_max: 0.05103454133184556
agent_compute-npc0_mean: 0.05103454133184556
agent_compute-npc0_median: 0.05103454133184556
agent_compute-npc0_min: 0.05103454133184556
agent_compute-npc1_max: 0.050224744928440694
agent_compute-npc1_mean: 0.050224744928440694
agent_compute-npc1_median: 0.050224744928440694
agent_compute-npc1_min: 0.050224744928440694
agent_compute-npc2_max: 0.04724123536345688
agent_compute-npc2_mean: 0.04724123536345688
agent_compute-npc2_median: 0.04724123536345688
agent_compute-npc2_min: 0.04724123536345688
agent_compute-npc3_max: 0.05301727059158291
agent_compute-npc3_mean: 0.05301727059158291
agent_compute-npc3_median: 0.05301727059158291
agent_compute-npc3_min: 0.05301727059158291
complete-iteration_max: 1.051289494690757
complete-iteration_mean: 1.051289494690757
complete-iteration_median: 1.051289494690757
complete-iteration_min: 1.051289494690757
deviation-center-line_max: 1.5747170826797563
deviation-center-line_mean: 1.5747170826797563
deviation-center-line_min: 1.5747170826797563
deviation-heading_max: 5.520952477729313
deviation-heading_mean: 5.520952477729313
deviation-heading_median: 5.520952477729313
deviation-heading_min: 5.520952477729313
driven_any_max: 10.080849283334494
driven_any_mean: 10.080849283334494
driven_any_median: 10.080849283334494
driven_any_min: 10.080849283334494
driven_lanedir_consec_max: 9.695785608023472
driven_lanedir_consec_mean: 9.695785608023472
driven_lanedir_consec_min: 9.695785608023472
driven_lanedir_max: 9.695785608023472
driven_lanedir_mean: 9.695785608023472
driven_lanedir_median: 9.695785608023472
driven_lanedir_min: 9.695785608023472
get_duckie_state_max: 1.5664472346316466e-06
get_duckie_state_mean: 1.5664472346316466e-06
get_duckie_state_median: 1.5664472346316466e-06
get_duckie_state_min: 1.5664472346316466e-06
get_robot_state_max: 0.020462591026832903
get_robot_state_mean: 0.020462591026832903
get_robot_state_median: 0.020462591026832903
get_robot_state_min: 0.020462591026832903
get_state_dump_max: 0.012779368589609395
get_state_dump_mean: 0.012779368589609395
get_state_dump_median: 0.012779368589609395
get_state_dump_min: 0.012779368589609395
get_ui_image_max: 0.06459618198845063
get_ui_image_mean: 0.06459618198845063
get_ui_image_median: 0.06459618198845063
get_ui_image_min: 0.06459618198845063
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes
details: {"LFV-norm-zigzag-000-ego0": {"driven_any": 10.080849283334494, "get_ui_image": 0.06459618198845063, "step_physics": 0.5823504903534208, "survival_time": 22.400000000000183, "driven_lanedir": 9.695785608023472, "get_state_dump": 0.012779368589609395, "get_robot_state": 0.020462591026832903, "sim_render-ego0": 0.004316448368845645, "sim_render-npc0": 0.00432187252426997, "sim_render-npc1": 0.004376995536957127, "sim_render-npc2": 0.0043953304035891936, "sim_render-npc3": 0.004382866793592151, "get_duckie_state": 1.5664472346316466e-06, "in-drivable-lane": 0.0, "deviation-heading": 5.520952477729313, "agent_compute-ego0": 0.0494210449252734, "agent_compute-npc0": 0.05103454133184556, "agent_compute-npc1": 0.050224744928440694, "agent_compute-npc2": 0.04724123536345688, "agent_compute-npc3": 0.05301727059158291, "complete-iteration": 1.051289494690757, "set_robot_commands": 0.002684747190411744, "deviation-center-line": 1.5747170826797563, "driven_lanedir_consec": 9.695785608023472, "sim_compute_sim_state": 0.07286735738571609, "sim_compute_performance-ego0": 0.002398163810339165, "sim_compute_performance-npc0": 0.0022854980222366435, "sim_compute_performance-npc1": 0.002327384290291631, "sim_compute_performance-npc2": 0.002379052092078534, "sim_compute_performance-npc3": 0.0023740474259136514}}
set_robot_commands_max: 0.002684747190411744
set_robot_commands_mean: 0.002684747190411744
set_robot_commands_median: 0.002684747190411744
set_robot_commands_min: 0.002684747190411744
sim_compute_performance-ego0_max: 0.002398163810339165
sim_compute_performance-ego0_mean: 0.002398163810339165
sim_compute_performance-ego0_median: 0.002398163810339165
sim_compute_performance-ego0_min: 0.002398163810339165
sim_compute_performance-npc0_max: 0.0022854980222366435
sim_compute_performance-npc0_mean: 0.0022854980222366435
sim_compute_performance-npc0_median: 0.0022854980222366435
sim_compute_performance-npc0_min: 0.0022854980222366435
sim_compute_performance-npc1_max: 0.002327384290291631
sim_compute_performance-npc1_mean: 0.002327384290291631
sim_compute_performance-npc1_median: 0.002327384290291631
sim_compute_performance-npc1_min: 0.002327384290291631
sim_compute_performance-npc2_max: 0.002379052092078534
sim_compute_performance-npc2_mean: 0.002379052092078534
sim_compute_performance-npc2_median: 0.002379052092078534
sim_compute_performance-npc2_min: 0.002379052092078534
sim_compute_performance-npc3_max: 0.0023740474259136514
sim_compute_performance-npc3_mean: 0.0023740474259136514
sim_compute_performance-npc3_median: 0.0023740474259136514
sim_compute_performance-npc3_min: 0.0023740474259136514
sim_compute_sim_state_max: 0.07286735738571609
sim_compute_sim_state_mean: 0.07286735738571609
sim_compute_sim_state_median: 0.07286735738571609
sim_compute_sim_state_min: 0.07286735738571609
sim_render-ego0_max: 0.004316448368845645
sim_render-ego0_mean: 0.004316448368845645
sim_render-ego0_median: 0.004316448368845645
sim_render-ego0_min: 0.004316448368845645
sim_render-npc0_max: 0.00432187252426997
sim_render-npc0_mean: 0.00432187252426997
sim_render-npc0_median: 0.00432187252426997
sim_render-npc0_min: 0.00432187252426997
sim_render-npc1_max: 0.004376995536957127
sim_render-npc1_mean: 0.004376995536957127
sim_render-npc1_median: 0.004376995536957127
sim_render-npc1_min: 0.004376995536957127
sim_render-npc2_max: 0.0043953304035891936
sim_render-npc2_mean: 0.0043953304035891936
sim_render-npc2_median: 0.0043953304035891936
sim_render-npc2_min: 0.0043953304035891936
sim_render-npc3_max: 0.004382866793592151
sim_render-npc3_mean: 0.004382866793592151
sim_render-npc3_median: 0.004382866793592151
sim_render-npc3_min: 0.004382866793592151
simulation-passed: 1
step_physics_max: 0.5823504903534208
step_physics_mean: 0.5823504903534208
step_physics_median: 0.5823504903534208
step_physics_min: 0.5823504903534208
survival_time_max: 22.400000000000183
survival_time_mean: 22.400000000000183
survival_time_min: 22.400000000000183
No reset possible
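In the listings above, every _min/_mean/_median/_max quadruple collapses to a single number because this validation step ran exactly one episode, so each aggregate is computed over a one-element sample. A small illustration of that collapse (the data structure and names are hypothetical, not the evaluator's actual code):

    # Sketch: aggregating a per-episode metric the way the tables above do.
    # With a single episode, min, max, mean and median coincide.
    import statistics

    per_episode = {
        "LFV-norm-zigzag-000-ego0": {"survival_time": 22.400000000000183},
    }

    values = [ep["survival_time"] for ep in per_episode.values()]
    aggregates = {
        "survival_time_max": max(values),
        "survival_time_mean": statistics.mean(values),
        "survival_time_median": statistics.median(values),
        "survival_time_min": min(values),
    }
    # -> all four entries equal 22.400000000000183
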
Job ID: 66024 | submission: 13945 | user: YU CHEN | user label: CBC Net v2 test - added APR 1 2 times anomaly + mar 28 bc_v1 | challenge: aido-LF-sim-validation | step: sim-3of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:12:30
driven_lanedir_consec_median: 18.296552861312065
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.4181774509907585
in-drivable-lane_median: 10.24999999999974


other stats
agent_compute-ego0_max: 0.09604292844951798
agent_compute-ego0_mean: 0.09604292844951798
agent_compute-ego0_median: 0.09604292844951798
agent_compute-ego0_min: 0.09604292844951798
complete-iteration_max: 0.35673799721227895
complete-iteration_mean: 0.35673799721227895
complete-iteration_median: 0.35673799721227895
complete-iteration_min: 0.35673799721227895
deviation-center-line_max: 3.4181774509907585
deviation-center-line_mean: 3.4181774509907585
deviation-center-line_min: 3.4181774509907585
deviation-heading_max: 14.079141933875848
deviation-heading_mean: 14.079141933875848
deviation-heading_median: 14.079141933875848
deviation-heading_min: 14.079141933875848
driven_any_max: 23.6361812803662
driven_any_mean: 23.6361812803662
driven_any_median: 23.6361812803662
driven_any_min: 23.6361812803662
driven_lanedir_consec_max: 18.296552861312065
driven_lanedir_consec_mean: 18.296552861312065
driven_lanedir_consec_min: 18.296552861312065
driven_lanedir_max: 18.296552861312065
driven_lanedir_mean: 18.296552861312065
driven_lanedir_median: 18.296552861312065
driven_lanedir_min: 18.296552861312065
get_duckie_state_max: 1.3407819177784789e-06
get_duckie_state_mean: 1.3407819177784789e-06
get_duckie_state_median: 1.3407819177784789e-06
get_duckie_state_min: 1.3407819177784789e-06
get_robot_state_max: 0.0040609360138244375
get_robot_state_mean: 0.0040609360138244375
get_robot_state_median: 0.0040609360138244375
get_robot_state_min: 0.0040609360138244375
get_state_dump_max: 0.005014554348516028
get_state_dump_mean: 0.005014554348516028
get_state_dump_median: 0.005014554348516028
get_state_dump_min: 0.005014554348516028
get_ui_image_max: 0.039417022074589025
get_ui_image_mean: 0.039417022074589025
get_ui_image_median: 0.039417022074589025
get_ui_image_min: 0.039417022074589025
in-drivable-lane_max: 10.24999999999974
in-drivable-lane_mean: 10.24999999999974
in-drivable-lane_min: 10.24999999999974
per-episodes details:
{"LF-norm-zigzag-000-ego0": {"driven_any": 23.6361812803662, "get_ui_image": 0.039417022074589025, "step_physics": 0.1884076247902139, "survival_time": 59.99999999999873, "driven_lanedir": 18.296552861312065, "get_state_dump": 0.005014554348516028, "get_robot_state": 0.0040609360138244375, "sim_render-ego0": 0.0041825177766798335, "get_duckie_state": 1.3407819177784789e-06, "in-drivable-lane": 10.24999999999974, "deviation-heading": 14.079141933875848, "agent_compute-ego0": 0.09604292844951798, "complete-iteration": 0.35673799721227895, "set_robot_commands": 0.0027016763583905096, "deviation-center-line": 3.4181774509907585, "driven_lanedir_consec": 18.296552861312065, "sim_compute_sim_state": 0.01453966403583206, "sim_compute_performance-ego0": 0.0022706310516789393}}
set_robot_commands_max: 0.0027016763583905096
set_robot_commands_mean: 0.0027016763583905096
set_robot_commands_median: 0.0027016763583905096
set_robot_commands_min: 0.0027016763583905096
sim_compute_performance-ego0_max: 0.0022706310516789393
sim_compute_performance-ego0_mean: 0.0022706310516789393
sim_compute_performance-ego0_median: 0.0022706310516789393
sim_compute_performance-ego0_min: 0.0022706310516789393
sim_compute_sim_state_max: 0.01453966403583206
sim_compute_sim_state_mean: 0.01453966403583206
sim_compute_sim_state_median: 0.01453966403583206
sim_compute_sim_state_min: 0.01453966403583206
sim_render-ego0_max: 0.0041825177766798335
sim_render-ego0_mean: 0.0041825177766798335
sim_render-ego0_median: 0.0041825177766798335
sim_render-ego0_min: 0.0041825177766798335
simulation-passed: 1
step_physics_max: 0.1884076247902139
step_physics_mean: 0.1884076247902139
step_physics_median: 0.1884076247902139
step_physics_min: 0.1884076247902139
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
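Note: complete-iteration is, to within loop overhead, the sum of the per-phase timings reported with it. A sketch using the values of job 66024 above (the phase list is inferred from the rows on this page, not from the evaluator source):

phases = {
    "step_physics": 0.1884076247902139,
    "agent_compute-ego0": 0.09604292844951798,
    "get_ui_image": 0.039417022074589025,
    "sim_compute_sim_state": 0.01453966403583206,
    "get_state_dump": 0.005014554348516028,
    "get_robot_state": 0.0040609360138244375,
    "sim_render-ego0": 0.0041825177766798335,
    "set_robot_commands": 0.0027016763583905096,
    "sim_compute_performance-ego0": 0.0022706310516789393,
    "get_duckie_state": 1.3407819177784789e-06,
}
print(sum(phases.values()))  # ~0.3566 s, vs. the reported complete-iteration of ~0.35674 s

The small residual (~0.1 ms) is bookkeeping overhead; step_physics and agent_compute-ego0 dominate the iteration time.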
Job ID: 65955 | submission: 14031 | user: YU CHEN | user label: CBC V2, mar28 bc, mar31_apr6 anomaly | challenge: aido-LF-sim-validation | step: sim-3of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:12:43
driven_lanedir_consec_median: 14.63611306762292
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.23109671215674
in-drivable-lane_median: 19.849999999999685


other stats
agent_compute-ego0_max: 0.09570794359631184
agent_compute-ego0_mean: 0.09570794359631184
agent_compute-ego0_median: 0.09570794359631184
agent_compute-ego0_min: 0.09570794359631184
complete-iteration_max: 0.36491691679085025
complete-iteration_mean: 0.36491691679085025
complete-iteration_median: 0.36491691679085025
complete-iteration_min: 0.36491691679085025
deviation-center-line_max: 3.23109671215674
deviation-center-line_mean: 3.23109671215674
deviation-center-line_min: 3.23109671215674
deviation-heading_max: 14.32036281081134
deviation-heading_mean: 14.32036281081134
deviation-heading_median: 14.32036281081134
deviation-heading_min: 14.32036281081134
driven_any_max: 24.15141467970167
driven_any_mean: 24.15141467970167
driven_any_median: 24.15141467970167
driven_any_min: 24.15141467970167
driven_lanedir_consec_max: 14.63611306762292
driven_lanedir_consec_mean: 14.63611306762292
driven_lanedir_consec_min: 14.63611306762292
driven_lanedir_max: 14.63611306762292
driven_lanedir_mean: 14.63611306762292
driven_lanedir_median: 14.63611306762292
driven_lanedir_min: 14.63611306762292
get_duckie_state_max: 1.4755747697434755e-06
get_duckie_state_mean: 1.4755747697434755e-06
get_duckie_state_median: 1.4755747697434755e-06
get_duckie_state_min: 1.4755747697434755e-06
get_robot_state_max: 0.004153711412669618
get_robot_state_mean: 0.004153711412669618
get_robot_state_median: 0.004153711412669618
get_robot_state_min: 0.004153711412669618
get_state_dump_max: 0.005061426527990489
get_state_dump_mean: 0.005061426527990489
get_state_dump_median: 0.005061426527990489
get_state_dump_min: 0.005061426527990489
get_ui_image_max: 0.039703272661499735
get_ui_image_mean: 0.039703272661499735
get_ui_image_median: 0.039703272661499735
get_ui_image_min: 0.039703272661499735
in-drivable-lane_max: 19.849999999999685
in-drivable-lane_mean: 19.849999999999685
in-drivable-lane_min: 19.849999999999685
per-episodes details:
{"LF-norm-zigzag-000-ego0": {"driven_any": 24.15141467970167, "get_ui_image": 0.039703272661499735, "step_physics": 0.1958209451886637, "survival_time": 59.99999999999873, "driven_lanedir": 14.63611306762292, "get_state_dump": 0.005061426527990489, "get_robot_state": 0.004153711412669618, "sim_render-ego0": 0.004289793233688824, "get_duckie_state": 1.4755747697434755e-06, "in-drivable-lane": 19.849999999999685, "deviation-heading": 14.32036281081134, "agent_compute-ego0": 0.09570794359631184, "complete-iteration": 0.36491691679085025, "set_robot_commands": 0.002685398980044604, "deviation-center-line": 3.23109671215674, "driven_lanedir_consec": 14.63611306762292, "sim_compute_sim_state": 0.015088645941411127, "sim_compute_performance-ego0": 0.0023107244807615765}}
set_robot_commands_max: 0.002685398980044604
set_robot_commands_mean: 0.002685398980044604
set_robot_commands_median: 0.002685398980044604
set_robot_commands_min: 0.002685398980044604
sim_compute_performance-ego0_max: 0.0023107244807615765
sim_compute_performance-ego0_mean: 0.0023107244807615765
sim_compute_performance-ego0_median: 0.0023107244807615765
sim_compute_performance-ego0_min: 0.0023107244807615765
sim_compute_sim_state_max: 0.015088645941411127
sim_compute_sim_state_mean: 0.015088645941411127
sim_compute_sim_state_median: 0.015088645941411127
sim_compute_sim_state_min: 0.015088645941411127
sim_render-ego0_max: 0.004289793233688824
sim_render-ego0_mean: 0.004289793233688824
sim_render-ego0_median: 0.004289793233688824
sim_render-ego0_min: 0.004289793233688824
simulation-passed: 1
step_physics_max: 0.1958209451886637
step_physics_mean: 0.1958209451886637
step_physics_median: 0.1958209451886637
step_physics_min: 0.1958209451886637
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
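Note: survival times such as 59.99999999999873 (the 60 s episode cap) and 14.250000000000068 are whole multiples of an apparent 0.05 s simulation timestep accumulated in binary floating point; the trailing digits are rounding noise, not measurement. A tiny illustration (the 0.05 s step is inferred from the reported values, not taken from the evaluator configuration):

t = 0.0
for _ in range(1200):  # 1200 steps of 0.05 s = 60 s nominal
    t += 0.05          # 0.05 has no exact binary representation
print(t)               # prints a value very close to, but not exactly, 60.0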
Job ID: 65924 | submission: 14035 | user: YU CHEN | user label: CBC V2 non dropout comparsion, mar28_apr6 bc, mar31_apr6 anomaly | challenge: aido-LF-sim-validation | step: sim-3of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:05:23
driven_lanedir_consec_median: 1.7392390261183333
survival_time_median: 14.250000000000068
deviation-center-line_median: 0.4716575366528647
in-drivable-lane_median: 9.15000000000005


other stats
agent_compute-ego0_max: 0.09826293691888556
agent_compute-ego0_mean: 0.09826293691888556
agent_compute-ego0_median: 0.09826293691888556
agent_compute-ego0_min: 0.09826293691888556
complete-iteration_max: 0.31842401227750977
complete-iteration_mean: 0.31842401227750977
complete-iteration_median: 0.31842401227750977
complete-iteration_min: 0.31842401227750977
deviation-center-line_max: 0.4716575366528647
deviation-center-line_mean: 0.4716575366528647
deviation-center-line_min: 0.4716575366528647
deviation-heading_max: 1.6110909705888754
deviation-heading_mean: 1.6110909705888754
deviation-heading_median: 1.6110909705888754
deviation-heading_min: 1.6110909705888754
driven_any_max: 4.653099617928902
driven_any_mean: 4.653099617928902
driven_any_median: 4.653099617928902
driven_any_min: 4.653099617928902
driven_lanedir_consec_max: 1.7392390261183333
driven_lanedir_consec_mean: 1.7392390261183333
driven_lanedir_consec_min: 1.7392390261183333
driven_lanedir_max: 1.7392390261183333
driven_lanedir_mean: 1.7392390261183333
driven_lanedir_median: 1.7392390261183333
driven_lanedir_min: 1.7392390261183333
get_duckie_state_max: 2.099917485163762e-06
get_duckie_state_mean: 2.099917485163762e-06
get_duckie_state_median: 2.099917485163762e-06
get_duckie_state_min: 2.099917485163762e-06
get_robot_state_max: 0.004027895994119711
get_robot_state_mean: 0.004027895994119711
get_robot_state_median: 0.004027895994119711
get_robot_state_min: 0.004027895994119711
get_state_dump_max: 0.005067529378237424
get_state_dump_mean: 0.005067529378237424
get_state_dump_median: 0.005067529378237424
get_state_dump_min: 0.005067529378237424
get_ui_image_max: 0.03997233697584459
get_ui_image_mean: 0.03997233697584459
get_ui_image_median: 0.03997233697584459
get_ui_image_min: 0.03997233697584459
in-drivable-lane_max: 9.15000000000005
in-drivable-lane_mean: 9.15000000000005
in-drivable-lane_min: 9.15000000000005
per-episodes details:
{"LF-norm-zigzag-000-ego0": {"driven_any": 4.653099617928902, "get_ui_image": 0.03997233697584459, "step_physics": 0.15051036031096132, "survival_time": 14.250000000000068, "driven_lanedir": 1.7392390261183333, "get_state_dump": 0.005067529378237424, "get_robot_state": 0.004027895994119711, "sim_render-ego0": 0.004158405990867348, "get_duckie_state": 2.099917485163762e-06, "in-drivable-lane": 9.15000000000005, "deviation-heading": 1.6110909705888754, "agent_compute-ego0": 0.09826293691888556, "complete-iteration": 0.31842401227750977, "set_robot_commands": 0.0025091346327241484, "deviation-center-line": 0.4716575366528647, "driven_lanedir_consec": 1.7392390261183333, "sim_compute_sim_state": 0.011593431025951891, "sim_compute_performance-ego0": 0.0022273272067516833}}
set_robot_commands_max: 0.0025091346327241484
set_robot_commands_mean: 0.0025091346327241484
set_robot_commands_median: 0.0025091346327241484
set_robot_commands_min: 0.0025091346327241484
sim_compute_performance-ego0_max: 0.0022273272067516833
sim_compute_performance-ego0_mean: 0.0022273272067516833
sim_compute_performance-ego0_median: 0.0022273272067516833
sim_compute_performance-ego0_min: 0.0022273272067516833
sim_compute_sim_state_max: 0.011593431025951891
sim_compute_sim_state_mean: 0.011593431025951891
sim_compute_sim_state_median: 0.011593431025951891
sim_compute_sim_state_min: 0.011593431025951891
sim_render-ego0_max: 0.004158405990867348
sim_render-ego0_mean: 0.004158405990867348
sim_render-ego0_median: 0.004158405990867348
sim_render-ego0_min: 0.004158405990867348
simulation-passed: 1
step_physics_max: 0.15051036031096132
step_physics_mean: 0.15051036031096132
step_physics_median: 0.15051036031096132
step_physics_min: 0.15051036031096132
survival_time_max: 14.250000000000068
survival_time_mean: 14.250000000000068
survival_time_min: 14.250000000000068
No reset possible
Job ID: 65895 | submission: 13587 | user: Andras Beres | user label: 202-1 | challenge: aido-LFV-sim-validation | step: sim-3of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-02 | duration: 0:07:21
survival_time_median: 7.199999999999982
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 3.0009305472871044
deviation-center-line_median: 0.4214937852680342


other stats
agent_compute-ego0_max: 0.019020503142784383
agent_compute-ego0_mean: 0.019020503142784383
agent_compute-ego0_median: 0.019020503142784383
agent_compute-ego0_min: 0.019020503142784383
agent_compute-npc0_max: 0.04398041100337587
agent_compute-npc0_mean: 0.04398041100337587
agent_compute-npc0_median: 0.04398041100337587
agent_compute-npc0_min: 0.04398041100337587
agent_compute-npc1_max: 0.04483553623331004
agent_compute-npc1_mean: 0.04483553623331004
agent_compute-npc1_median: 0.04483553623331004
agent_compute-npc1_min: 0.04483553623331004
agent_compute-npc2_max: 0.045252982501325936
agent_compute-npc2_mean: 0.045252982501325936
agent_compute-npc2_median: 0.045252982501325936
agent_compute-npc2_min: 0.045252982501325936
agent_compute-npc3_max: 0.045335592072585536
agent_compute-npc3_mean: 0.045335592072585536
agent_compute-npc3_median: 0.045335592072585536
agent_compute-npc3_min: 0.045335592072585536
complete-iteration_max: 0.8479273648097597
complete-iteration_mean: 0.8479273648097597
complete-iteration_median: 0.8479273648097597
complete-iteration_min: 0.8479273648097597
deviation-center-line_max: 0.4214937852680342
deviation-center-line_mean: 0.4214937852680342
deviation-center-line_min: 0.4214937852680342
deviation-heading_max: 1.050453632109368
deviation-heading_mean: 1.050453632109368
deviation-heading_median: 1.050453632109368
deviation-heading_min: 1.050453632109368
driven_any_max: 3.0442525484194354
driven_any_mean: 3.0442525484194354
driven_any_median: 3.0442525484194354
driven_any_min: 3.0442525484194354
driven_lanedir_consec_max: 3.0009305472871044
driven_lanedir_consec_mean: 3.0009305472871044
driven_lanedir_consec_min: 3.0009305472871044
driven_lanedir_max: 3.0009305472871044
driven_lanedir_mean: 3.0009305472871044
driven_lanedir_median: 3.0009305472871044
driven_lanedir_min: 3.0009305472871044
get_duckie_state_max: 1.874463311557112e-06
get_duckie_state_mean: 1.874463311557112e-06
get_duckie_state_median: 1.874463311557112e-06
get_duckie_state_min: 1.874463311557112e-06
get_robot_state_max: 0.020027693386735587
get_robot_state_mean: 0.020027693386735587
get_robot_state_median: 0.020027693386735587
get_robot_state_min: 0.020027693386735587
get_state_dump_max: 0.012792389968345905
get_state_dump_mean: 0.012792389968345905
get_state_dump_median: 0.012792389968345905
get_state_dump_min: 0.012792389968345905
get_ui_image_max: 0.057675447135136046
get_ui_image_mean: 0.057675447135136046
get_ui_image_median: 0.057675447135136046
get_ui_image_min: 0.057675447135136046
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes details:
{"LFV-norm-techtrack-000-ego0": {"driven_any": 3.0442525484194354, "get_ui_image": 0.057675447135136046, "step_physics": 0.44889365886819776, "survival_time": 7.199999999999982, "driven_lanedir": 3.0009305472871044, "get_state_dump": 0.012792389968345905, "get_robot_state": 0.020027693386735587, "sim_render-ego0": 0.00422815125564049, "sim_render-npc0": 0.00430172558488517, "sim_render-npc1": 0.0042402497653303475, "sim_render-npc2": 0.004227799382703058, "sim_render-npc3": 0.004331054358646788, "get_duckie_state": 1.874463311557112e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.050453632109368, "agent_compute-ego0": 0.019020503142784383, "agent_compute-npc0": 0.04398041100337587, "agent_compute-npc1": 0.04483553623331004, "agent_compute-npc2": 0.045252982501325936, "agent_compute-npc3": 0.045335592072585536, "complete-iteration": 0.8479273648097597, "set_robot_commands": 0.0026198173391407936, "deviation-center-line": 0.4214937852680342, "driven_lanedir_consec": 3.0009305472871044, "sim_compute_sim_state": 0.0630253528726512, "sim_compute_performance-ego0": 0.0023452857445026266, "sim_compute_performance-npc0": 0.002394462453907934, "sim_compute_performance-npc1": 0.0022740692927919583, "sim_compute_performance-npc2": 0.002293660722929856, "sim_compute_performance-npc3": 0.0023312881075102706}}
set_robot_commands_max: 0.0026198173391407936
set_robot_commands_mean: 0.0026198173391407936
set_robot_commands_median: 0.0026198173391407936
set_robot_commands_min: 0.0026198173391407936
sim_compute_performance-ego0_max: 0.0023452857445026266
sim_compute_performance-ego0_mean: 0.0023452857445026266
sim_compute_performance-ego0_median: 0.0023452857445026266
sim_compute_performance-ego0_min: 0.0023452857445026266
sim_compute_performance-npc0_max: 0.002394462453907934
sim_compute_performance-npc0_mean: 0.002394462453907934
sim_compute_performance-npc0_median: 0.002394462453907934
sim_compute_performance-npc0_min: 0.002394462453907934
sim_compute_performance-npc1_max: 0.0022740692927919583
sim_compute_performance-npc1_mean: 0.0022740692927919583
sim_compute_performance-npc1_median: 0.0022740692927919583
sim_compute_performance-npc1_min: 0.0022740692927919583
sim_compute_performance-npc2_max: 0.002293660722929856
sim_compute_performance-npc2_mean: 0.002293660722929856
sim_compute_performance-npc2_median: 0.002293660722929856
sim_compute_performance-npc2_min: 0.002293660722929856
sim_compute_performance-npc3_max: 0.0023312881075102706
sim_compute_performance-npc3_mean: 0.0023312881075102706
sim_compute_performance-npc3_median: 0.0023312881075102706
sim_compute_performance-npc3_min: 0.0023312881075102706
sim_compute_sim_state_max: 0.0630253528726512
sim_compute_sim_state_mean: 0.0630253528726512
sim_compute_sim_state_median: 0.0630253528726512
sim_compute_sim_state_min: 0.0630253528726512
sim_render-ego0_max: 0.00422815125564049
sim_render-ego0_mean: 0.00422815125564049
sim_render-ego0_median: 0.00422815125564049
sim_render-ego0_min: 0.00422815125564049
sim_render-npc0_max: 0.00430172558488517
sim_render-npc0_mean: 0.00430172558488517
sim_render-npc0_median: 0.00430172558488517
sim_render-npc0_min: 0.00430172558488517
sim_render-npc1_max: 0.0042402497653303475
sim_render-npc1_mean: 0.0042402497653303475
sim_render-npc1_median: 0.0042402497653303475
sim_render-npc1_min: 0.0042402497653303475
sim_render-npc2_max: 0.004227799382703058
sim_render-npc2_mean: 0.004227799382703058
sim_render-npc2_median: 0.004227799382703058
sim_render-npc2_min: 0.004227799382703058
sim_render-npc3_max: 0.004331054358646788
sim_render-npc3_mean: 0.004331054358646788
sim_render-npc3_median: 0.004331054358646788
sim_render-npc3_min: 0.004331054358646788
simulation-passed: 1
step_physics_max: 0.44889365886819776
step_physics_mean: 0.44889365886819776
step_physics_median: 0.44889365886819776
step_physics_min: 0.44889365886819776
survival_time_max: 7.199999999999982
survival_time_mean: 7.199999999999982
survival_time_min: 7.199999999999982
No reset possible
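Note: the aido-LFV-sim-validation job above simulates four NPC robots besides the ego robot, so each robot contributes its own agent_compute-*, sim_render-* and sim_compute_performance-* rows, and the heavier physics and state computation push complete-iteration to ~0.85 s versus ~0.32-0.36 s for the single-robot LF jobs above. A sketch of grouping such rows per robot (regex-based and illustrative; the rows dict holds values copied from this page):

import re

rows = {
    "agent_compute-ego0_mean": 0.019020503142784383,
    "agent_compute-npc0_mean": 0.04398041100337587,
    "agent_compute-npc1_mean": 0.04483553623331004,
    "agent_compute-npc2_mean": 0.045252982501325936,
    "agent_compute-npc3_mean": 0.045335592072585536,
}
per_robot = {}
for name, value in rows.items():
    m = re.match(r"(?P<metric>.+)-(?P<robot>ego\d+|npc\d+)_mean$", name)
    if m:
        per_robot.setdefault(m.group("robot"), {})[m.group("metric")] = value
print(per_robot)  # {"ego0": {"agent_compute": ...}, "npc0": {...}, ...}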