
Evaluator 5105

ID: 5105
evaluator: gpu-production-spot-3-05
owner: I don't have one πŸ˜€
machine: gpu-prod_3dddc8247a0e
process: gpu-production-spot-3-05_3dddc8247a0e
version: 6.2.7
first heard:
last heard:
status: inactive
# evaluating:
# success: 15 (65880)
# timeout:
# failed: 8 (66082)
# error:
# aborted: 1 (66326)
# host-error:
arm: 0
x86_64: 1
Mac: 0
gpu available: 1
Number of processors: 64
Processor frequency: 0.0 GHz
Free processors: 99%
RAM total: 249.0 GB
RAM free: 238.9 GB
Disk: 969.3 GB
Disk available: 862.2 GB
Docker Hub:
P1: 1
P2:
Cloud simulations: 1
PI Camera: 0
# Duckiebots: 0
Map 3x3 available:
Number of duckies:
gpu cores:
AIDO 2 Map LF public:
AIDO 2 Map LF private:
AIDO 2 Map LFV public:
AIDO 2 Map LFV private:
AIDO 2 Map LFVI public:
AIDO 2 Map LFVI private:
AIDO 3 Map LF public:
AIDO 3 Map LF private:
AIDO 3 Map LFV public:
AIDO 3 Map LFV private:
AIDO 3 Map LFVI public:
AIDO 3 Map LFVI private:
AIDO 5 Map large loop:
ETU track:
for 2021, map is ETH_small_inter
IPFS mountpoint /ipfs available:
IPNS mountpoint /ipns available:

Evaluator jobs

Columns: Job ID, submission, user, user label, challenge, step, status, up to date, evaluator, date started, date completed, duration, message
Job 66357: submission 14120, user Andrea CensiΒ πŸ‡¨πŸ‡­, label exercises_braitenberg, challenge mooc-BV1, step 417, status success, up to date yes, evaluator gpu-production-spot-3-05, duration 0:08:35
other stats
agent_compute-ego0 (max/mean/median/min): 0.01098277753075057
complete-iteration (max/mean/median/min): 0.2942411313291456
deviation-center-line (max/mean/median/min): 0.0
deviation-heading (max/mean/median/min): 0.0
driven_any (max/mean/median/min): 2.4965999999999684
driven_lanedir_consec (max/mean/median/min): 0.0
driven_lanedir (max/mean/median/min): 0.0
get_duckie_state (max/mean/median/min): 0.12280784976566143
get_robot_state (max/mean/median/min): 0.003725107157258295
get_state_dump (max/mean/median/min): 0.024302492097055046
get_ui_image (max/mean/median/min): 0.03630521453794886
in-drivable-lane (max/mean/median/min): 42.649999999999714
per-episodes details: {"t08d60-ego0": {"driven_any": 2.4965999999999684, "get_ui_image": 0.03630521453794886, "step_physics": 0.0769101629770891, "survival_time": 42.649999999999714, "driven_lanedir": 0.0, "get_state_dump": 0.024302492097055046, "get_robot_state": 0.003725107157258295, "sim_render-ego0": 0.0037080560411725727, "get_duckie_state": 0.12280784976566143, "in-drivable-lane": 42.649999999999714, "deviation-heading": 0.0, "agent_compute-ego0": 0.01098277753075057, "complete-iteration": 0.2942411313291456, "set_robot_commands": 0.0021772018919504776, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.011309099141551962, "sim_compute_performance-ego0": 0.001929525469170242}}
set_robot_commands (max/mean/median/min): 0.0021772018919504776
sim_compute_performance-ego0 (max/mean/median/min): 0.001929525469170242
sim_compute_sim_state (max/mean/median/min): 0.011309099141551962
sim_render-ego0 (max/mean/median/min): 0.0037080560411725727
simulation-passed: 1
step_physics (max/mean/median/min): 0.0769101629770891
survival_time (max/mean/median/min): 42.649999999999714
No reset possible
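Each job's summary statistics above are derived from the per-episode `details` dict: every metric is aggregated across episodes as max/mean/median/min. A minimal sketch of that aggregation (the episode key and values are taken from the job above, abridged; the helper itself is illustrative, not the experiment manager's actual code):

```python
import statistics

# One-episode details dict, as reported for job 66357 (abridged).
details = {
    "t08d60-ego0": {
        "driven_any": 2.4965999999999684,
        "survival_time": 42.649999999999714,
        "deviation-heading": 0.0,
    }
}

# Collect each metric's values across episodes...
metrics = {}
for episode in details.values():
    for name, value in episode.items():
        metrics.setdefault(name, []).append(value)

# ...then aggregate, producing the keys seen in the "other stats" tables.
summary = {
    f"{name}_{agg}": fn(values)
    for name, values in metrics.items()
    for agg, fn in (("max", max), ("mean", statistics.mean),
                    ("median", statistics.median), ("min", min))
}

# With a single episode, max == mean == median == min for every metric.
```

This also explains why all four aggregates coincide for every metric on this page: each of these jobs ran a single episode.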
Job 66353: submission 14119, user Andrea CensiΒ πŸ‡¨πŸ‡­, label exercises_braitenberg, challenge mooc-BV1, step 420, status failed, up to date yes, evaluator gpu-production-spot-3-05, duration 0:02:40
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 581, in run_episode
    agent_ci.write_topic_and_expect_zero("observations", obs_plus)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 304, in read_reply
    others = read_until_over(fpout, timeout=timeout, nickname=nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 331, in read_until_over
    raise RemoteNodeAborted(m)
zuper_nodes.structures.RemoteNodeAborted: External node "ego0" aborted:

error in ego0 |Exception while handling a message on topic "observations".
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "/agent/solution/agent.py", line 65, in on_received_observations
              ||     context.info("received first observations", data=data)
              || TypeError: info() got an unexpected keyword argument 'data'
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 355, in loop
              ||     handle_message_node(parsed, receiver0, context0)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 531, in handle_message_node
              ||     call_if_fun_exists(agent, expect_fn, data=ob, context=context, timing=timing)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 24, in call_if_fun_exists
              ||     raise ZTypeError(msg, f=f, args=kwargs, argspec=a) from e
              || zuper_commons.types.exceptions.ZTypeError: Cannot call function <bound method BraitenbergAgent.on_received_observations of <__main__.BraitenbergAgent object at 0x7fb4f69eb880>>.
              || β”‚       f: <bound method BraitenbergAgent.on_received_observations of <__main__.BraitenbergAgent object at 0x7fb4f69eb880>>
              || β”‚    args: dict[2]
              || β”‚          β”‚ data:
              || β”‚          β”‚ DB20Observations
              || β”‚          β”‚ β”‚ camera: JPGImage(jpg_data=57361 bytes b'\xff\xd8\xff\xe0\x00\x10JFIF')
              || β”‚          β”‚ β”‚ odometry:
              || β”‚          β”‚ β”‚ DB20Odometry
              || β”‚          β”‚ β”‚ β”‚ resolution_rad: 0.046542113386515455
              || β”‚          β”‚ β”‚ β”‚ axis_left_rad: 0.0
              || β”‚          β”‚ β”‚ β”‚ axis_right_rad: 0.0
              || β”‚          β”‚ context: <zuper_nodes_wrapper.wrapper.ConcreteContext object at 0x7fb4e43dd3d0>
              || β”‚ argspec: <class 'inspect.FullArgSpec'>[7]
              || β”‚          #0 [self, context, data]
              || β”‚          #1 None
              || β”‚          #2 None
              || β”‚          #3 None
              || β”‚          #4 []
              || β”‚          #5 None
              || β”‚          #6 dict[2]
              || β”‚             β”‚ context: <class 'zuper_nodes_wrapper.interface.Context'>
              || β”‚             β”‚ data:
              || β”‚             β”‚ dataclass aido_schemas.schemas.DB20Observations
              || β”‚             β”‚  field   camera : dataclass aido_schemas.protocol_simulator.JPGImage
              || β”‚             β”‚                    field jpg_data : bytes
              || β”‚             β”‚                           __doc__
              || β”‚             β”‚                                             An image in JPG format.
              || β”‚             β”‚
              || β”‚             β”‚                                             jpg_data
              || β”‚             β”‚  field odometry : dataclass aido_schemas.schemas.DB20Odometry
              || β”‚             β”‚                    field resolution_rad : float
              || β”‚             β”‚                    field  axis_left_rad : float
              || β”‚             β”‚                    field axis_right_rad : float
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 312, in main
    length_s = await run_episode(
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 593, in run_episode
    raise dc.InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Trouble with communication to the agent.
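The root cause above is narrow: the agent's `on_received_observations` forwarded a `data=` keyword to `context.info()`, which (per the traceback) accepts only a message. A minimal reproduction with a stub `Context` (the stub is hypothetical; only the failing call pattern is taken from the traceback):

```python
# Hypothetical stand-in for the logging context; its info() takes only a
# message string, which is the signature the TypeError above implies.
class Context:
    def info(self, msg: str) -> None:
        print(f"[info] {msg}")

context = Context()

# What the submitted agent did -- the call that raised the TypeError:
try:
    context.info("received first observations", data={"camera": "..."})
except TypeError as exc:
    error = str(exc)  # "... got an unexpected keyword argument 'data'"

# The fix: drop the unsupported keyword (or fold the payload into the message).
context.info("received first observations")
```

The wrapper then converts that `TypeError` into `ZTypeError` and ultimately `InvalidSubmission`, which is why a one-keyword mistake fails the whole episode.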
No reset possible
Job 66326: submission 13798, user Nicholas Kostelnik, label template-random, challenge aido-hello-sim-validation, step 370, status aborted, up to date no, evaluator gpu-production-spot-3-05, duration 0:00:22
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 261, in _raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.8/dist-packages/requests/models.py", line 941, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.35/images/create?tag=sha256%3Ab13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691&fromImage=docker.io%2Fnitaigao%2Faido-submissions

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 65, in docker_pull
    pulling = client.api.pull(repository=repository, tag=br.tag, stream=True, decode=True)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/image.py", line 415, in pull
    self._raise_for_status(response)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/local/lib/python3.8/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.ImageNotFound: 404 Client Error: Not Found ("pull access denied for nitaigao/aido-submissions, repository does not exist or may require 'docker login': denied: requested access to the resource is denied")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 38, in docker_pull_retry
    return docker_pull(client, image_name, quiet=quiet)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 84, in docker_pull
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: Cannot pull repo  docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691  tag  None

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 745, in get_cr
    cr = run_single(
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 944, in run_single
    docker_pull_retry(client, image, ntimes=4, wait=5)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 42, in docker_pull_retry
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: After trying 4 I still could not pull docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691
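The runner's retry behaviour is visible in the trace: `docker_pull_retry` attempts the pull a fixed number of times, then raises a summary `PullError` chained to the last underlying failure. A generic sketch of that retry shape (names and defaults are illustrative, not the real duckietown_build_utils implementation; note that for a 404 like the one above, retrying cannot help, since the image is private or does not exist):

```python
import time

class PullError(RuntimeError):
    """Illustrative stand-in for duckietown_build_utils' PullError."""

def retry(action, ntimes: int = 4, wait: float = 5.0, sleep=time.sleep):
    """Run action() up to ntimes, sleeping `wait` seconds between attempts.
    On final failure, raise a summary error chained to the last exception,
    mirroring the "After trying 4 I still could not pull ..." message above."""
    last_exc = None
    for attempt in range(ntimes):
        try:
            return action()
        except Exception as exc:
            last_exc = exc
            if attempt < ntimes - 1:
                sleep(wait)
    raise PullError(f"After trying {ntimes} times I still could not succeed") from last_exc

# Usage: an action that always fails, like pulling an inaccessible image.
calls = []
def failing_pull():
    calls.append(1)
    raise OSError("pull access denied")

try:
    retry(failing_pull, ntimes=4, wait=0.0)
except PullError as e:
    outcome = (len(calls), type(e.__cause__).__name__)
```

A refinement the sketch omits: treating "not found / access denied" as fatal on the first attempt would fail such jobs in seconds instead of retrying a pull that can never succeed.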
No reset possible
Job 66311: submission 13912, user YU CHEN, label CBC Net v2 - test, challenge aido-LFP-sim-validation, step sim-1of4, status success, up to date no, evaluator gpu-production-spot-3-05, duration 0:01:54
survival_time_median: 6.999999999999983
in-drivable-lane_median: 2.9999999999999893
driven_lanedir_consec_median: 1.2419325131045045
deviation-center-line_median: 0.3181369819909449


other stats
agent_compute-ego0 (max/mean/median/min): 0.08803133254355573
complete-iteration (max/mean/median/min): 0.2623189290364583
deviation-center-line (max/mean/min): 0.3181369819909449
deviation-heading (max/mean/median/min): 1.60922816302621
driven_any (max/mean/median/min): 2.0337345493026606
driven_lanedir_consec (max/mean/min): 1.2419325131045045
driven_lanedir (max/mean/median/min): 1.2419325131045045
get_duckie_state (max/mean/median/min): 0.004279562767515791
get_robot_state (max/mean/median/min): 0.0036827401911958737
get_state_dump (max/mean/median/min): 0.005541209633468736
get_ui_image (max/mean/median/min): 0.02676638643792335
in-drivable-lane (max/mean/min): 2.9999999999999893
per-episodes details: {"LFP-norm-small_loop-000-ego0": {"driven_any": 2.0337345493026606, "get_ui_image": 0.02676638643792335, "step_physics": 0.11987884670284622, "survival_time": 6.999999999999983, "driven_lanedir": 1.2419325131045045, "get_state_dump": 0.005541209633468736, "get_robot_state": 0.0036827401911958737, "sim_render-ego0": 0.0037807200817351646, "get_duckie_state": 0.004279562767515791, "in-drivable-lane": 2.9999999999999893, "deviation-heading": 1.60922816302621, "agent_compute-ego0": 0.08803133254355573, "complete-iteration": 0.2623189290364583, "set_robot_commands": 0.0023401747358606216, "deviation-center-line": 0.3181369819909449, "driven_lanedir_consec": 1.2419325131045045, "sim_compute_sim_state": 0.005940373062242007, "sim_compute_performance-ego0": 0.001975678383035863}}
set_robot_commands (max/mean/median/min): 0.0023401747358606216
sim_compute_performance-ego0 (max/mean/median/min): 0.001975678383035863
sim_compute_sim_state (max/mean/median/min): 0.005940373062242007
sim_render-ego0 (max/mean/median/min): 0.0037807200817351646
simulation-passed: 1
step_physics (max/mean/median/min): 0.11987884670284622
survival_time (max/mean/min): 6.999999999999983
No reset possible
Job 66298: submission 13912, user YU CHEN, label CBC Net v2 - test, challenge aido-LFP-sim-validation, step sim-1of4, status success, up to date no, evaluator gpu-production-spot-3-05, duration 0:02:06
survival_time_median: 7.649999999999981
in-drivable-lane_median: 1.649999999999994
driven_lanedir_consec_median: 1.7677828520022092
deviation-center-line_median: 0.5191029476253077


other stats
agent_compute-ego0 (max/mean/median/min): 0.09442916783419522
complete-iteration (max/mean/median/min): 0.2800000658282986
deviation-center-line (max/mean/min): 0.5191029476253077
deviation-heading (max/mean/median/min): 2.298108419831147
driven_any (max/mean/median/min): 2.545220793697414
driven_lanedir_consec (max/mean/min): 1.7677828520022092
driven_lanedir (max/mean/median/min): 1.7677828520022092
get_duckie_state (max/mean/median/min): 0.004408376557486398
get_robot_state (max/mean/median/min): 0.0038706915719168527
get_state_dump (max/mean/median/min): 0.0055329226828240725
get_ui_image (max/mean/median/min): 0.028745115577400504
in-drivable-lane (max/mean/min): 1.649999999999994
per-episodes details: {"LFP-norm-small_loop-000-ego0": {"driven_any": 2.545220793697414, "get_ui_image": 0.028745115577400504, "step_physics": 0.12842761541341807, "survival_time": 7.649999999999981, "driven_lanedir": 1.7677828520022092, "get_state_dump": 0.0055329226828240725, "get_robot_state": 0.0038706915719168527, "sim_render-ego0": 0.003873219737758884, "get_duckie_state": 0.004408376557486398, "in-drivable-lane": 1.649999999999994, "deviation-heading": 2.298108419831147, "agent_compute-ego0": 0.09442916783419522, "complete-iteration": 0.2800000658282986, "set_robot_commands": 0.0024824901060624557, "deviation-center-line": 0.5191029476253077, "driven_lanedir_consec": 1.7677828520022092, "sim_compute_sim_state": 0.0061055205085060816, "sim_compute_performance-ego0": 0.002028409536782797}}
set_robot_commands (max/mean/median/min): 0.0024824901060624557
sim_compute_performance-ego0 (max/mean/median/min): 0.002028409536782797
sim_compute_sim_state (max/mean/median/min): 0.0061055205085060816
sim_render-ego0 (max/mean/median/min): 0.003873219737758884
simulation-passed: 1
step_physics (max/mean/median/min): 0.12842761541341807
survival_time (max/mean/min): 7.649999999999981
No reset possible
Job 66245: submission 13730, user YU CHEN, label BC Net V2, challenge aido-LF-sim-validation, step sim-1of4, status success, up to date no, evaluator gpu-production-spot-3-05, duration 0:13:24
driven_lanedir_consec_median: 10.767167195984808
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.8148342342968165
in-drivable-lane_median: 29.04999999999931


other stats
agent_compute-ego0 (max/mean/median/min): 0.0566596311891605
complete-iteration (max/mean/median/min): 0.2998450338393822
deviation-center-line (max/mean/min): 2.8148342342968165
deviation-heading (max/mean/median/min): 14.711450103936617
driven_any (max/mean/median/min): 23.888309447774308
driven_lanedir_consec (max/mean/min): 10.767167195984808
driven_lanedir (max/mean/median/min): 10.767167195984808
get_duckie_state (max/mean/median/min): 1.406689468371084e-06
get_robot_state (max/mean/median/min): 0.004163769063703424
get_state_dump (max/mean/median/min): 0.005166965360744708
get_ui_image (max/mean/median/min): 0.03696878148951598
in-drivable-lane (max/mean/min): 29.04999999999931
per-episodes details: {"LF-norm-techtrack-000-ego0": {"driven_any": 23.888309447774308, "get_ui_image": 0.03696878148951598, "step_physics": 0.1754923940796737, "survival_time": 59.99999999999873, "driven_lanedir": 10.767167195984808, "get_state_dump": 0.005166965360744708, "get_robot_state": 0.004163769063703424, "sim_render-ego0": 0.004306884530581205, "get_duckie_state": 1.406689468371084e-06, "in-drivable-lane": 29.04999999999931, "deviation-heading": 14.711450103936617, "agent_compute-ego0": 0.0566596311891605, "complete-iteration": 0.2998450338393822, "set_robot_commands": 0.002635207005484118, "deviation-center-line": 2.8148342342968165, "driven_lanedir_consec": 10.767167195984808, "sim_compute_sim_state": 0.0120596818185468, "sim_compute_performance-ego0": 0.002298869856390528}}
set_robot_commands (max/mean/median/min): 0.002635207005484118
sim_compute_performance-ego0 (max/mean/median/min): 0.002298869856390528
sim_compute_sim_state (max/mean/median/min): 0.0120596818185468
sim_render-ego0 (max/mean/median/min): 0.004306884530581205
simulation-passed: 1
step_physics (max/mean/median/min): 0.1754923940796737
survival_time (max/mean/min): 59.99999999999873
No reset possible
Job 66221: submission 13909, user YU CHEN, label CBC Net v2 - test, challenge aido-LF-sim-validation, step sim-2of4, status success, up to date no, evaluator gpu-production-spot-3-05, duration 0:10:27
driven_lanedir_consec_median: 11.446792278969829
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.2631433715471196
in-drivable-lane_median: 30.799999999999272


other stats
agent_compute-ego0 (max/mean/median/min): 0.09427865220545532
complete-iteration (max/mean/median/min): 0.2819748111410403
deviation-center-line (max/mean/min): 2.2631433715471196
deviation-heading (max/mean/median/min): 13.033217212563518
driven_any (max/mean/median/min): 25.155373037676036
driven_lanedir_consec (max/mean/min): 11.446792278969829
driven_lanedir (max/mean/median/min): 11.446792278969829
get_duckie_state (max/mean/median/min): 2.0455162689945877e-06
get_robot_state (max/mean/median/min): 0.004049809945810843
get_state_dump (max/mean/median/min): 0.004948288276729536
get_ui_image (max/mean/median/min): 0.02891986594410562
in-drivable-lane (max/mean/min): 30.799999999999272
per-episodes details: {"LF-norm-small_loop-000-ego0": {"driven_any": 25.155373037676036, "get_ui_image": 0.02891986594410562, "step_physics": 0.13404336678396156, "survival_time": 59.99999999999873, "driven_lanedir": 11.446792278969829, "get_state_dump": 0.004948288276729536, "get_robot_state": 0.004049809945810843, "sim_render-ego0": 0.004063446059215079, "get_duckie_state": 2.0455162689945877e-06, "in-drivable-lane": 30.799999999999272, "deviation-heading": 13.033217212563518, "agent_compute-ego0": 0.09427865220545532, "complete-iteration": 0.2819748111410403, "set_robot_commands": 0.0025953799858379127, "deviation-center-line": 2.2631433715471196, "driven_lanedir_consec": 11.446792278969829, "sim_compute_sim_state": 0.006817400008812237, "sim_compute_performance-ego0": 0.0021627457513102486}}
set_robot_commands (max/mean/median/min): 0.0025953799858379127
sim_compute_performance-ego0 (max/mean/median/min): 0.0021627457513102486
sim_compute_sim_state (max/mean/median/min): 0.006817400008812237
sim_render-ego0 (max/mean/median/min): 0.004063446059215079
simulation-passed: 1
step_physics (max/mean/median/min): 0.13404336678396156
survival_time (max/mean/min): 59.99999999999873
No reset possible
Job 66205: submission 13944, user YU CHEN, label CBC Net v2 test - added mar 31 anomaly + mar 28 bc_v1, challenge aido-LFP-sim-validation, step sim-2of4, status success, up to date no, evaluator gpu-production-spot-3-05, duration 0:03:55
survival_time_median: 13.550000000000058
in-drivable-lane_median: 4.600000000000021
driven_lanedir_consec_median: 3.224544301888936
deviation-center-line_median: 0.5367071440757034


other stats
agent_compute-ego0_max 0.0913183224551818
agent_compute-ego0_mean 0.0913183224551818
agent_compute-ego0_median 0.0913183224551818
agent_compute-ego0_min 0.0913183224551818
complete-iteration_max 0.3357047456152299
complete-iteration_mean 0.3357047456152299
complete-iteration_median 0.3357047456152299
complete-iteration_min 0.3357047456152299
deviation-center-line_max 0.5367071440757034
deviation-center-line_mean 0.5367071440757034
deviation-center-line_min 0.5367071440757034
deviation-heading_max 2.6881822361757037
deviation-heading_mean 2.6881822361757037
deviation-heading_median 2.6881822361757037
deviation-heading_min 2.6881822361757037
driven_any_max 5.414832285246704
driven_any_mean 5.414832285246704
driven_any_median 5.414832285246704
driven_any_min 5.414832285246704
driven_lanedir_consec_max 3.224544301888936
driven_lanedir_consec_mean 3.224544301888936
driven_lanedir_consec_min 3.224544301888936
driven_lanedir_max 3.224544301888936
driven_lanedir_mean 3.224544301888936
driven_lanedir_median 3.224544301888936
driven_lanedir_min 3.224544301888936
get_duckie_state_max 0.026458600864690898
get_duckie_state_mean 0.026458600864690898
get_duckie_state_median 0.026458600864690898
get_duckie_state_min 0.026458600864690898
get_robot_state_max 0.0039521753787994385
get_robot_state_mean 0.0039521753787994385
get_robot_state_median 0.0039521753787994385
get_robot_state_min 0.0039521753787994385
get_state_dump_max 0.009143645272535436
get_state_dump_mean 0.009143645272535436
get_state_dump_median 0.009143645272535436
get_state_dump_min 0.009143645272535436
get_ui_image_max 0.03457683237159953
get_ui_image_mean 0.03457683237159953
get_ui_image_median 0.03457683237159953
get_ui_image_min 0.03457683237159953
in-drivable-lane_max 4.600000000000021
in-drivable-lane_mean 4.600000000000021
in-drivable-lane_min 4.600000000000021
per-episodes details:
{
  "LFP-norm-loop-000-ego0": {
    "driven_any": 5.414832285246704,
    "get_ui_image": 0.03457683237159953,
    "step_physics": 0.15267186743371627,
    "survival_time": 13.550000000000058,
    "driven_lanedir": 3.224544301888936,
    "get_state_dump": 0.009143645272535436,
    "get_robot_state": 0.0039521753787994385,
    "sim_render-ego0": 0.004033241201849545,
    "get_duckie_state": 0.026458600864690898,
    "in-drivable-lane": 4.600000000000021,
    "deviation-heading": 2.6881822361757037,
    "agent_compute-ego0": 0.0913183224551818,
    "complete-iteration": 0.3357047456152299,
    "set_robot_commands": 0.002573391970466165,
    "deviation-center-line": 0.5367071440757034,
    "driven_lanedir_consec": 3.224544301888936,
    "sim_compute_sim_state": 0.008732456494780147,
    "sim_compute_performance-ego0": 0.0021348087226643283
  }
}
set_robot_commands_max 0.002573391970466165
set_robot_commands_mean 0.002573391970466165
set_robot_commands_median 0.002573391970466165
set_robot_commands_min 0.002573391970466165
sim_compute_performance-ego0_max 0.0021348087226643283
sim_compute_performance-ego0_mean 0.0021348087226643283
sim_compute_performance-ego0_median 0.0021348087226643283
sim_compute_performance-ego0_min 0.0021348087226643283
sim_compute_sim_state_max 0.008732456494780147
sim_compute_sim_state_mean 0.008732456494780147
sim_compute_sim_state_median 0.008732456494780147
sim_compute_sim_state_min 0.008732456494780147
sim_render-ego0_max 0.004033241201849545
sim_render-ego0_mean 0.004033241201849545
sim_render-ego0_median 0.004033241201849545
sim_render-ego0_min 0.004033241201849545
simulation-passed 1
step_physics_max 0.15267186743371627
step_physics_mean 0.15267186743371627
step_physics_median 0.15267186743371627
step_physics_min 0.15267186743371627
survival_time_max 13.550000000000058
survival_time_mean 13.550000000000058
survival_time_min 13.550000000000058
No reset possible
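Each metric above is reported four times (_max, _mean, _median, _min), which suggests the server aggregates per-episode values; since each step here ran a single episode, all four rows collapse to the same number. A minimal sketch of that aggregation, assuming a per-episodes dict shaped like the "details" JSON shown above (the function name `aggregate` is illustrative, not part of the Duckietown codebase):

```python
import statistics

def aggregate(per_episode: dict) -> dict:
    """Collapse {episode: {metric: value}} into the flat
    {metric_max, metric_mean, metric_median, metric_min} rows."""
    out = {}
    metrics = {m for ep in per_episode.values() for m in ep}
    for m in sorted(metrics):
        values = [ep[m] for ep in per_episode.values() if m in ep]
        out[f"{m}_max"] = max(values)
        out[f"{m}_mean"] = statistics.mean(values)
        out[f"{m}_median"] = statistics.median(values)
        out[f"{m}_min"] = min(values)
    return out

# With a single episode, max == mean == median == min,
# matching the repeated values in the tables above.
stats = aggregate({"LFP-norm-loop-000-ego0": {"survival_time": 13.55}})
```

With several episodes per step, the four rows would diverge and the median row becomes the robust summary.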
Job ID: 66138 | submission: 13534 | user: András Kalapos 🇭🇺 | user label: 3090 | challenge: aido-LF-sim-testing | step: sim-3of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-05 | duration: 0:11:40
driven_lanedir_consec_median 27.710514058490286
survival_time_median 59.99999999999873
deviation-center-line_median 2.4346641364173975
in-drivable-lane_median 0.0


other stats
agent_compute-ego0_max 0.014892302186761072
agent_compute-ego0_mean 0.014892302186761072
agent_compute-ego0_median 0.014892302186761072
agent_compute-ego0_min 0.014892302186761072
complete-iteration_max 0.2957696757844644
complete-iteration_mean 0.2957696757844644
complete-iteration_median 0.2957696757844644
complete-iteration_min 0.2957696757844644
deviation-center-line_max 2.4346641364173975
deviation-center-line_mean 2.4346641364173975
deviation-center-line_min 2.4346641364173975
deviation-heading_max 7.853069197737671
deviation-heading_mean 7.853069197737671
deviation-heading_median 7.853069197737671
deviation-heading_min 7.853069197737671
driven_any_max 28.077943998016508
driven_any_mean 28.077943998016508
driven_any_median 28.077943998016508
driven_any_min 28.077943998016508
driven_lanedir_consec_max 27.710514058490286
driven_lanedir_consec_mean 27.710514058490286
driven_lanedir_consec_min 27.710514058490286
driven_lanedir_max 27.710514058490286
driven_lanedir_mean 27.710514058490286
driven_lanedir_median 27.710514058490286
driven_lanedir_min 27.710514058490286
get_duckie_state_max 1.5043596939480772e-06
get_duckie_state_mean 1.5043596939480772e-06
get_duckie_state_median 1.5043596939480772e-06
get_duckie_state_min 1.5043596939480772e-06
get_robot_state_max 0.00418074621348258
get_robot_state_mean 0.00418074621348258
get_robot_state_median 0.00418074621348258
get_robot_state_min 0.00418074621348258
get_state_dump_max 0.00511528590041136
get_state_dump_mean 0.00511528590041136
get_state_dump_median 0.00511528590041136
get_state_dump_min 0.00511528590041136
get_ui_image_max 0.03944532361058371
get_ui_image_mean 0.03944532361058371
get_ui_image_median 0.03944532361058371
get_ui_image_min 0.03944532361058371
in-drivable-lane_max 0.0
in-drivable-lane_mean 0.0
in-drivable-lane_min 0.0
per-episodes details:
{
  "LF-norm-zigzag-000-ego0": {
    "driven_any": 28.077943998016508,
    "get_ui_image": 0.03944532361058371,
    "step_physics": 0.208109127095498,
    "survival_time": 59.99999999999873,
    "driven_lanedir": 27.710514058490286,
    "get_state_dump": 0.00511528590041136,
    "get_robot_state": 0.00418074621348258,
    "sim_render-ego0": 0.004186509352341778,
    "get_duckie_state": 1.5043596939480772e-06,
    "in-drivable-lane": 0.0,
    "deviation-heading": 7.853069197737671,
    "agent_compute-ego0": 0.014892302186761072,
    "complete-iteration": 0.2957696757844644,
    "set_robot_commands": 0.0025463886403918365,
    "deviation-center-line": 2.4346641364173975,
    "driven_lanedir_consec": 27.710514058490286,
    "sim_compute_sim_state": 0.014879715829765073,
    "sim_compute_performance-ego0": 0.002316239275205741
  }
}
set_robot_commands_max 0.0025463886403918365
set_robot_commands_mean 0.0025463886403918365
set_robot_commands_median 0.0025463886403918365
set_robot_commands_min 0.0025463886403918365
sim_compute_performance-ego0_max 0.002316239275205741
sim_compute_performance-ego0_mean 0.002316239275205741
sim_compute_performance-ego0_median 0.002316239275205741
sim_compute_performance-ego0_min 0.002316239275205741
sim_compute_sim_state_max 0.014879715829765073
sim_compute_sim_state_mean 0.014879715829765073
sim_compute_sim_state_median 0.014879715829765073
sim_compute_sim_state_min 0.014879715829765073
sim_render-ego0_max 0.004186509352341778
sim_render-ego0_mean 0.004186509352341778
sim_render-ego0_median 0.004186509352341778
sim_render-ego0_min 0.004186509352341778
simulation-passed 1
step_physics_max 0.208109127095498
step_physics_mean 0.208109127095498
step_physics_median 0.208109127095498
step_physics_min 0.208109127095498
survival_time_max 59.99999999999873
survival_time_mean 59.99999999999873
survival_time_min 59.99999999999873
No reset possible
Job ID: 66127 | submission: 14032 | user: YU CHEN | user label: CBC V2, mar28 bc, mar31_apr6 anomaly | challenge: aido-LFP-sim-validation | step: sim-1of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-05 | duration: 0:02:07
survival_time_median 6.949999999999983
in-drivable-lane_median 3.3499999999999894
driven_lanedir_consec_median 1.25355486071285
deviation-center-line_median 0.2343581734409109


other stats
agent_compute-ego0_max 0.10357789822987148
agent_compute-ego0_mean 0.10357789822987148
agent_compute-ego0_median 0.10357789822987148
agent_compute-ego0_min 0.10357789822987148
complete-iteration_max 0.3147784556661333
complete-iteration_mean 0.3147784556661333
complete-iteration_median 0.3147784556661333
complete-iteration_min 0.3147784556661333
deviation-center-line_max 0.2343581734409109
deviation-center-line_mean 0.2343581734409109
deviation-center-line_min 0.2343581734409109
deviation-heading_max 1.3516471361061388
deviation-heading_mean 1.3516471361061388
deviation-heading_median 1.3516471361061388
deviation-heading_min 1.3516471361061388
driven_any_max 2.115878874436983
driven_any_mean 2.115878874436983
driven_any_median 2.115878874436983
driven_any_min 2.115878874436983
driven_lanedir_consec_max 1.25355486071285
driven_lanedir_consec_mean 1.25355486071285
driven_lanedir_consec_min 1.25355486071285
driven_lanedir_max 1.25355486071285
driven_lanedir_mean 1.25355486071285
driven_lanedir_median 1.25355486071285
driven_lanedir_min 1.25355486071285
get_duckie_state_max 0.00496297904423305
get_duckie_state_mean 0.00496297904423305
get_duckie_state_median 0.00496297904423305
get_duckie_state_min 0.00496297904423305
get_robot_state_max 0.004314025810786656
get_robot_state_mean 0.004314025810786656
get_robot_state_median 0.004314025810786656
get_robot_state_min 0.004314025810786656
get_state_dump_max 0.006370353698730469
get_state_dump_mean 0.006370353698730469
get_state_dump_median 0.006370353698730469
get_state_dump_min 0.006370353698730469
get_ui_image_max 0.031547648566109796
get_ui_image_mean 0.031547648566109796
get_ui_image_median 0.031547648566109796
get_ui_image_min 0.031547648566109796
in-drivable-lane_max 3.3499999999999894
in-drivable-lane_mean 3.3499999999999894
in-drivable-lane_min 3.3499999999999894
per-episodes details:
{
  "LFP-norm-small_loop-000-ego0": {
    "driven_any": 2.115878874436983,
    "get_ui_image": 0.031547648566109796,
    "step_physics": 0.14788354975836618,
    "survival_time": 6.949999999999983,
    "driven_lanedir": 1.25355486071285,
    "get_state_dump": 0.006370353698730469,
    "get_robot_state": 0.004314025810786656,
    "sim_render-ego0": 0.004198980331420898,
    "get_duckie_state": 0.00496297904423305,
    "in-drivable-lane": 3.3499999999999894,
    "deviation-heading": 1.3516471361061388,
    "agent_compute-ego0": 0.10357789822987148,
    "complete-iteration": 0.3147784556661333,
    "set_robot_commands": 0.002756260122571673,
    "deviation-center-line": 0.2343581734409109,
    "driven_lanedir_consec": 1.25355486071285,
    "sim_compute_sim_state": 0.0067704950060163225,
    "sim_compute_performance-ego0": 0.002281195776803153
  }
}
set_robot_commands_max 0.002756260122571673
set_robot_commands_mean 0.002756260122571673
set_robot_commands_median 0.002756260122571673
set_robot_commands_min 0.002756260122571673
sim_compute_performance-ego0_max 0.002281195776803153
sim_compute_performance-ego0_mean 0.002281195776803153
sim_compute_performance-ego0_median 0.002281195776803153
sim_compute_performance-ego0_min 0.002281195776803153
sim_compute_sim_state_max 0.0067704950060163225
sim_compute_sim_state_mean 0.0067704950060163225
sim_compute_sim_state_median 0.0067704950060163225
sim_compute_sim_state_min 0.0067704950060163225
sim_render-ego0_max 0.004198980331420898
sim_render-ego0_mean 0.004198980331420898
sim_render-ego0_median 0.004198980331420898
sim_render-ego0_min 0.004198980331420898
simulation-passed 1
step_physics_max 0.14788354975836618
step_physics_mean 0.14788354975836618
step_physics_median 0.14788354975836618
step_physics_min 0.14788354975836618
survival_time_max 6.949999999999983
survival_time_mean 6.949999999999983
survival_time_min 6.949999999999983
No reset possible
Job ID: 66117 | submission: 13511 | user: András Kalapos 🇭🇺 | user label: real-v1.0-3091-310 | challenge: aido-LFP-sim-validation | step: sim-1of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-05 | duration: 0:02:18
survival_time_median 6.0499999999999865
in-drivable-lane_median 0.0
driven_lanedir_consec_median 2.43689235867362
deviation-center-line_median 0.22753841409584816


other stats
agent_compute-ego0_max 0.015248304507771477
agent_compute-ego0_mean 0.015248304507771477
agent_compute-ego0_median 0.015248304507771477
agent_compute-ego0_min 0.015248304507771477
complete-iteration_max 0.2084383475975912
complete-iteration_mean 0.2084383475975912
complete-iteration_median 0.2084383475975912
complete-iteration_min 0.2084383475975912
deviation-center-line_max 0.22753841409584816
deviation-center-line_mean 0.22753841409584816
deviation-center-line_min 0.22753841409584816
deviation-heading_max 0.7875408349176081
deviation-heading_mean 0.7875408349176081
deviation-heading_median 0.7875408349176081
deviation-heading_min 0.7875408349176081
driven_any_max 2.4573078204188
driven_any_mean 2.4573078204188
driven_any_median 2.4573078204188
driven_any_min 2.4573078204188
driven_lanedir_consec_max 2.43689235867362
driven_lanedir_consec_mean 2.43689235867362
driven_lanedir_consec_min 2.43689235867362
driven_lanedir_max 2.43689235867362
driven_lanedir_mean 2.43689235867362
driven_lanedir_median 2.43689235867362
driven_lanedir_min 2.43689235867362
get_duckie_state_max 0.004887592597085921
get_duckie_state_mean 0.004887592597085921
get_duckie_state_median 0.004887592597085921
get_duckie_state_min 0.004887592597085921
get_robot_state_max 0.004050166880498167
get_robot_state_mean 0.004050166880498167
get_robot_state_median 0.004050166880498167
get_robot_state_min 0.004050166880498167
get_state_dump_max 0.0058498382568359375
get_state_dump_mean 0.0058498382568359375
get_state_dump_median 0.0058498382568359375
get_state_dump_min 0.0058498382568359375
get_ui_image_max 0.029998701126849065
get_ui_image_mean 0.029998701126849065
get_ui_image_median 0.029998701126849065
get_ui_image_min 0.029998701126849065
in-drivable-lane_max 0.0
in-drivable-lane_mean 0.0
in-drivable-lane_min 0.0
per-episodes details:
{
  "LFP-norm-small_loop-000-ego0": {
    "driven_any": 2.4573078204188,
    "get_ui_image": 0.029998701126849065,
    "step_physics": 0.13303193107980196,
    "survival_time": 6.0499999999999865,
    "driven_lanedir": 2.43689235867362,
    "get_state_dump": 0.0058498382568359375,
    "get_robot_state": 0.004050166880498167,
    "sim_render-ego0": 0.004157771829698906,
    "get_duckie_state": 0.004887592597085921,
    "in-drivable-lane": 0.0,
    "deviation-heading": 0.7875408349176081,
    "agent_compute-ego0": 0.015248304507771477,
    "complete-iteration": 0.2084383475975912,
    "set_robot_commands": 0.0024672648945792777,
    "deviation-center-line": 0.22753841409584816,
    "driven_lanedir_consec": 2.43689235867362,
    "sim_compute_sim_state": 0.006464647465064878,
    "sim_compute_performance-ego0": 0.002179077414215588
  }
}
set_robot_commands_max 0.0024672648945792777
set_robot_commands_mean 0.0024672648945792777
set_robot_commands_median 0.0024672648945792777
set_robot_commands_min 0.0024672648945792777
sim_compute_performance-ego0_max 0.002179077414215588
sim_compute_performance-ego0_mean 0.002179077414215588
sim_compute_performance-ego0_median 0.002179077414215588
sim_compute_performance-ego0_min 0.002179077414215588
sim_compute_sim_state_max 0.006464647465064878
sim_compute_sim_state_mean 0.006464647465064878
sim_compute_sim_state_median 0.006464647465064878
sim_compute_sim_state_min 0.006464647465064878
sim_render-ego0_max 0.004157771829698906
sim_render-ego0_mean 0.004157771829698906
sim_render-ego0_median 0.004157771829698906
sim_render-ego0_min 0.004157771829698906
simulation-passed 1
step_physics_max 0.13303193107980196
step_physics_mean 0.13303193107980196
step_physics_median 0.13303193107980196
step_physics_min 0.13303193107980196
survival_time_max 6.0499999999999865
survival_time_mean 6.0499999999999865
survival_time_min 6.0499999999999865
No reset possible
Job ID: 66111 | submission: 13511 | user: András Kalapos 🇭🇺 | user label: real-v1.0-3091-310 | challenge: aido-LFP-sim-validation | step: sim-1of4 | status: failed | up to date: no | evaluator: gpu-production-spot-3-05 | duration: 0:00:42
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
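
The root cause repeated throughout this trace is cuDNN failing to initialize ("Failed to get convolution algorithm"), which in practice is most often TensorFlow's default behavior of pre-allocating nearly all GPU memory at session creation, leaving cuDNN nothing to work with. A hedged sketch of one commonly reported mitigation, assuming the submission container runs TensorFlow and honors this environment variable:

```shell
# Ask TensorFlow to allocate GPU memory on demand instead of reserving
# almost all of it up front. Set this before the agent process starts,
# e.g. in the submission's Dockerfile (ENV ...) or launch script; it
# often avoids the cuDNN initialization failure when another process
# already holds part of the GPU's memory.
export TF_FORCE_GPU_ALLOW_GROWTH=true
```

The same effect can be requested from Python via `tf.config.experimental.set_memory_growth` on each GPU device before any op runs; the environment variable is simply easier to apply when the TensorFlow session is created deep inside third-party code (here, RLlib's `PPOTrainer`).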
No reset possible
66105 13513 András Kalapos 🇭🇺 real-v1.0-3091-310 aido-LFV-sim-validation sim-2of4 failed no gpu-production-spot-3-05 0:01:13
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
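The failure above repeats the same root error twice (`UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize`), raised while `PPOTrainer` builds its policy network during the node's `init()`. This error commonly appears when TensorFlow reserves all GPU memory up front and cuDNN then cannot allocate its workspace. A minimal sketch of a common mitigation (an assumption about this failure, not a confirmed fix for this submission) is to request on-demand GPU memory allocation before TensorFlow is imported, e.g. at the top of `solution.py`:

```python
import os

# Ask TensorFlow's GPU allocator to grow memory on demand instead of
# reserving the whole device at startup. This must be set before
# `import tensorflow` for it to take effect, and often avoids the
# "Failed to get convolution algorithm / cuDNN failed to initialize" error.
os.environ.setdefault("TF_FORCE_GPU_ALLOW_GROWTH", "true")
```

The equivalent in-process setting is `tf.config.experimental.set_memory_growth(gpu, True)` for each visible GPU, called before any session or model is created.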
Job ID 66098 · submission 13513 · András Kalapos 🇭🇺 · user label real-v1.0-3091-310 · challenge aido-LFV-sim-validation · step sim-2of4 · status failed · up to date: no · evaluator gpu-production-spot-3-05 · duration 0:01:13
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
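The failure above ("Failed to get convolution algorithm. This is probably because cuDNN failed to initialize") is commonly caused by TensorFlow reserving all GPU memory up front, leaving none for cuDNN workspace allocation. A frequently suggested workaround, offered here as an assumption rather than a confirmed fix for this submission, is to enable on-demand GPU memory growth before TensorFlow initializes:

```python
import os

# Ask TensorFlow to allocate GPU memory incrementally instead of grabbing
# it all at startup. This environment variable must be set before
# TensorFlow is imported (e.g. at the top of solution.py).
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
```

The same effect can be achieved after import with `tf.config.experimental.set_memory_growth(gpu, True)` for each physical GPU, but the environment variable is simpler when the import order is hard to control, as in a wrapped submission entry point.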
Job 66094 · submission 13513 · András Kalapos 🇭🇺 · real-v1.0-3091-310 · aido-LFV-sim-validation · sim-2of4 · failed · not up to date · gpu-production-spot-3-05 · duration 0:01:13
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
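Both failed jobs abort at the same point: constructing `PPOTrainer` in the node's `init()` triggers the first convolution, and cuDNN fails to initialize ("Failed to get convolution algorithm"). A frequent cause of this error is TensorFlow pre-allocating all GPU memory, leaving cuDNN no workspace to initialize in; that is an assumption about these jobs, not something the log confirms. A minimal sketch of the usual mitigation, which would go at the top of `solution.py` before TensorFlow is imported:

```python
import os

# Assumption: asking TensorFlow to allocate GPU memory on demand, rather than
# reserving the whole device up front, often avoids "Failed to get convolution
# algorithm" errors caused by cuDNN failing to initialize. This environment
# variable must be set before TensorFlow is first imported.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# Equivalent in-code form (TF 2.x), usable once TensorFlow is imported:
#   import tensorflow as tf
#   for gpu in tf.config.experimental.list_physical_devices("GPU"):
#       tf.config.experimental.set_memory_growth(gpu, True)
```

If the error persists with memory growth enabled, the remaining usual suspects are a CUDA/cuDNN version mismatch between the submission image and the evaluator's driver, or another process already holding the GPU.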
66090	13513	András Kalapos 🇭🇺	real-v1.0-3091-310	aido-LFV-sim-validation	sim-2of4	failed	no	gpu-production-spot-3-05	0:01:15
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
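The failure above is a `tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm ... cuDNN failed to initialize`, raised while the submission's `PPOTrainer` builds its first convolution. On shared evaluators this error frequently means TensorFlow could not allocate GPU memory at startup (TF reserves the whole device by default), rather than a genuinely broken cuDNN install. A minimal, hedged mitigation sketch — assuming the agent uses TF 2.x and that no other process has exhausted the GPU — is to request on-demand memory growth before TensorFlow is imported:

```python
import os

# Ask TensorFlow (read at import time) to grow GPU memory on demand instead
# of reserving the entire device up front. This is a common mitigation for
# "cuDNN failed to initialize" errors on shared GPUs; it does not help if
# the driver/cuDNN versions are actually incompatible.
os.environ.setdefault("TF_FORCE_GPU_ALLOW_GROWTH", "true")
```

Placing this at the very top of `solution.py` (before any `import tensorflow` or `ray` import that pulls TensorFlow in) is the safest spot, since the variable has no effect once the GPU context has been created.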
66086 | 13534 | András Kalapos 🇭🇺 | 3090 | aido-LF-sim-testing | sim-2of4 | failed | no | gpu-production-spot-3-05 | 0:00:41
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
Job 66082 | submission 13534 | András Kalapos 🇭🇺 | label 3090 | aido-LF-sim-testing | sim-2of4 | failed | up to date: no | gpu-production-spot-3-05 | 0:00:40
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 268, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 275, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
No reset possible
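The two-part traceback above ("The above exception was the direct cause of the following exception") is standard Python exception chaining: the experiment manager wraps the low-level TensorFlow failure in an `InvalidSubmission` using `raise ... from e`. A minimal sketch of that mechanism, with stand-in names (not the real Duckietown or rllib code):

```python
# Sketch of `raise NewError(msg) from e`: the original error is stored as
# __cause__, which the interpreter renders as "The above exception was the
# direct cause of the following exception". All names here are stand-ins.

class InvalidSubmission(Exception):
    """Stand-in for duckietown_challenges.exceptions.InvalidSubmission."""

def get_agent_protocol():
    # Stand-in for the rllib/TensorFlow failure at the top of the log.
    raise RuntimeError("agent protocol failed")

try:
    try:
        get_agent_protocol()
    except Exception as e:
        raise InvalidSubmission("Getting agent protocol") from e
except InvalidSubmission as err:
    caught = err

# The chained cause survives on the wrapping exception.
assert isinstance(caught.__cause__, RuntimeError)
```

This is why the log shows the full inner TensorFlow stack first, then the outer `InvalidSubmission: Getting agent protocol`.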
Job 66071 | submission 13572 | MΓ‘rton Tim πŸ‡­πŸ‡Ί | label: 3626 | challenge: aido-LFV-sim-validation | step: sim-0of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-05 | duration: 0:02:49
survival_time_median: 6.999999999999983
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 2.85112686774763
deviation-center-line_median: 0.2794231493783243


other stats
agent_compute-ego0 max/mean/median/min: 0.01098277753075057... 0.04991524777513869
agent_compute-npc0 max/mean/median/min: 0.027704443491942492
complete-iteration max/mean/median/min: 0.3478583055185088
deviation-center-line max/mean/min: 0.2794231493783243
deviation-heading max/mean/median/min: 0.984391238876164
driven_any max/mean/median/min: 2.8815266506942567
driven_lanedir_consec max/mean/min: 2.85112686774763
driven_lanedir max/mean/median/min: 2.85112686774763
get_duckie_state max/mean/median/min: 2.220167335889018e-06
get_robot_state max/mean/median/min: 0.008738908361881337
get_state_dump max/mean/median/min: 0.007654724391639656
get_ui_image max/mean/median/min: 0.03608237936141643
in-drivable-lane max/mean/min: 0.0
per-episodes details: {"LFV-norm-small_loop-000-ego0": {"driven_any": 2.8815266506942567, "get_ui_image": 0.03608237936141643, "step_physics": 0.18688061558608468, "survival_time": 6.999999999999983, "driven_lanedir": 2.85112686774763, "get_state_dump": 0.007654724391639656, "get_robot_state": 0.008738908361881337, "sim_render-ego0": 0.004450219742795254, "sim_render-npc0": 0.0044182429076932, "get_duckie_state": 2.220167335889018e-06, "in-drivable-lane": 0.0, "deviation-heading": 0.984391238876164, "agent_compute-ego0": 0.04991524777513869, "agent_compute-npc0": 0.027704443491942492, "complete-iteration": 0.3478583055185088, "set_robot_commands": 0.0026737189461998904, "deviation-center-line": 0.2794231493783243, "driven_lanedir_consec": 2.85112686774763, "sim_compute_sim_state": 0.011806726455688477, "sim_compute_performance-ego0": 0.0024553765641882066, "sim_compute_performance-npc0": 0.0023514497364666444}}
set_robot_commands max/mean/median/min: 0.0026737189461998904
sim_compute_performance-ego0 max/mean/median/min: 0.0024553765641882066
sim_compute_performance-npc0 max/mean/median/min: 0.0023514497364666444
sim_compute_sim_state max/mean/median/min: 0.011806726455688477
sim_render-ego0 max/mean/median/min: 0.004450219742795254
sim_render-npc0 max/mean/median/min: 0.0044182429076932
simulation-passed: 1
step_physics max/mean/median/min: 0.18688061558608468
survival_time max/mean/min: 6.999999999999983
No reset possible
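The aggregate rows above are just the per-episode "details" values reduced four ways, and with a single episode max, mean, median, and min all coincide. A minimal sketch of that reduction, assuming the per-episode JSON shape shown in the job record (keys and values taken from it; `aggregate` is a hypothetical helper, not the real scoring code):

```python
import json
from statistics import mean, median

# Per-episode details as in the record above (one episode).
details = json.loads(
    '{"LFV-norm-small_loop-000-ego0":'
    ' {"survival_time": 6.999999999999983, "driven_any": 2.8815266506942567}}'
)

def aggregate(details: dict, key: str) -> dict:
    """Reduce one metric across episodes into the four stats rows."""
    values = [episode[key] for episode in details.values()]
    return {"max": max(values), "mean": mean(values),
            "median": median(values), "min": min(values)}

stats = aggregate(details, "survival_time")
# With a single episode, all four reductions are the same number.
assert stats["max"] == stats["mean"] == stats["median"] == stats["min"]
```

This explains why every stats block in this log repeats the same value four times per metric: each of these validation jobs ran exactly one episode.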
Job 66052 | submission 13572 | MΓ‘rton Tim πŸ‡­πŸ‡Ί | label: 3626 | challenge: aido-LFV-sim-validation | step: sim-0of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-05 | duration: 0:03:15
survival_time_median: 7.399999999999982
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 2.953239553537632
deviation-center-line_median: 0.345925227904316


other stats
agent_compute-ego0 max/mean/median/min: 0.05061866933067373
agent_compute-npc0 max/mean/median/min: 0.02652624949512866
complete-iteration max/mean/median/min: 0.3616088828784507
deviation-center-line max/mean/min: 0.345925227904316
deviation-heading max/mean/median/min: 1.1698156622390756
driven_any max/mean/median/min: 3.0007728529946887
driven_lanedir_consec max/mean/min: 2.953239553537632
driven_lanedir max/mean/median/min: 2.953239553537632
get_duckie_state max/mean/median/min: 1.5601215746578757e-06
get_robot_state max/mean/median/min: 0.008556271559440049
get_state_dump max/mean/median/min: 0.007823467254638672
get_ui_image max/mean/median/min: 0.037486228366826205
in-drivable-lane max/mean/min: 0.0
per-episodes details: {"LFV-norm-small_loop-000-ego0": {"driven_any": 3.0007728529946887, "get_ui_image": 0.037486228366826205, "step_physics": 0.1989319148479692, "survival_time": 7.399999999999982, "driven_lanedir": 2.953239553537632, "get_state_dump": 0.007823467254638672, "get_robot_state": 0.008556271559440049, "sim_render-ego0": 0.004536878342596477, "sim_render-npc0": 0.0044952174961166895, "get_duckie_state": 1.5601215746578757e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.1698156622390756, "agent_compute-ego0": 0.05061866933067373, "agent_compute-npc0": 0.02652624949512866, "complete-iteration": 0.3616088828784507, "set_robot_commands": 0.002717394156744016, "deviation-center-line": 0.345925227904316, "driven_lanedir_consec": 2.953239553537632, "sim_compute_sim_state": 0.012228751342568622, "sim_compute_performance-ego0": 0.002449144453010303, "sim_compute_performance-npc0": 0.002451410229574114}}
set_robot_commands max/mean/median/min: 0.002717394156744016
sim_compute_performance-ego0 max/mean/median/min: 0.002449144453010303
sim_compute_performance-npc0 max/mean/median/min: 0.002451410229574114
sim_compute_sim_state max/mean/median/min: 0.012228751342568622
sim_render-ego0 max/mean/median/min: 0.004536878342596477
sim_render-npc0 max/mean/median/min: 0.0044952174961166895
simulation-passed: 1
step_physics max/mean/median/min: 0.1989319148479692
survival_time max/mean/min: 7.399999999999982
No reset possible
Job 66042 | submission 13585 | Andras Beres | label: 202-1 | challenge: aido-LFP-sim-testing | step: sim-3of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-05 | duration: 0:03:29
survival_time_median: 13.300000000000054
in-drivable-lane_median: 1.6500000000000012
driven_lanedir_consec_median: 4.819302414609702
deviation-center-line_median: 0.9615793674243132


other stats
agent_compute-ego0 max/mean/median/min: 0.0179847197586231
complete-iteration max/mean/median/min: 0.3267297673314698
deviation-center-line max/mean/min: 0.9615793674243132
deviation-heading max/mean/median/min: 2.328233497638378
driven_any max/mean/median/min: 5.555300996850063
driven_lanedir_consec max/mean/min: 4.819302414609702
driven_lanedir max/mean/median/min: 4.819302414609702
get_duckie_state max/mean/median/min: 0.022534069497040597
get_robot_state max/mean/median/min: 0.003995019398378522
get_state_dump max/mean/median/min: 0.008673649155691769
get_ui_image max/mean/median/min: 0.03876713777749279
in-drivable-lane max/mean/min: 1.6500000000000012
per-episodes details: {"LFP-norm-techtrack-000-ego0": {"driven_any": 5.555300996850063, "get_ui_image": 0.03876713777749279, "step_physics": 0.21178514323431008, "survival_time": 13.300000000000054, "driven_lanedir": 4.819302414609702, "get_state_dump": 0.008673649155691769, "get_robot_state": 0.003995019398378522, "sim_render-ego0": 0.004161377524615227, "get_duckie_state": 0.022534069497040597, "in-drivable-lane": 1.6500000000000012, "deviation-heading": 2.328233497638378, "agent_compute-ego0": 0.0179847197586231, "complete-iteration": 0.3267297673314698, "set_robot_commands": 0.0023910570680425406, "deviation-center-line": 0.9615793674243132, "driven_lanedir_consec": 4.819302414609702, "sim_compute_sim_state": 0.0141344570488519, "sim_compute_performance-ego0": 0.0021997721454177456}}
set_robot_commands max/mean/median/min: 0.0023910570680425406
sim_compute_performance-ego0 max/mean/median/min: 0.0021997721454177456
sim_compute_sim_state max/mean/median/min: 0.0141344570488519
sim_render-ego0 max/mean/median/min: 0.004161377524615227
simulation-passed: 1
step_physics max/mean/median/min: 0.21178514323431008
survival_time max/mean/min: 13.300000000000054
No reset possible
Job 66014 | submission 13991 | Frank (Chude) Qian πŸ‡¨πŸ‡¦ | label: CBC Net - MixTraining - Expert LF Human LFP | challenge: aido-LF-sim-validation | step: sim-1of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-05 | duration: 0:11:11
driven_lanedir_consec_median: 14.111028585628368
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.2127630203326025
in-drivable-lane_median: 20.449999999999733


other stats
agent_compute-ego0 max/mean/median/min: 0.05569522347081015
complete-iteration max/mean/median/min: 0.2947149526864464
deviation-center-line max/mean/min: 3.2127630203326025
deviation-heading max/mean/median/min: 14.9742682141222
driven_any max/mean/median/min: 29.247562560425617
driven_lanedir_consec max/mean/min: 14.111028585628368
driven_lanedir max/mean/median/min: 17.13073899480601
get_duckie_state max/mean/median/min: 1.5170647639418325e-06
get_robot_state max/mean/median/min: 0.004078451143911141
get_state_dump max/mean/median/min: 0.0050869960769030775
get_ui_image max/mean/median/min: 0.035481322516410375
in-drivable-lane max/mean/min: 20.449999999999733
per-episodes details: {"LF-norm-techtrack-000-ego0": {"driven_any": 29.247562560425617, "get_ui_image": 0.035481322516410375, "step_physics": 0.1717076470313918, "survival_time": 59.99999999999873, "driven_lanedir": 17.13073899480601, "get_state_dump": 0.0050869960769030775, "get_robot_state": 0.004078451143911141, "sim_render-ego0": 0.004199034566188435, "get_duckie_state": 1.5170647639418325e-06, "in-drivable-lane": 20.449999999999733, "deviation-heading": 14.9742682141222, "agent_compute-ego0": 0.05569522347081015, "complete-iteration": 0.2947149526864464, "set_robot_commands": 0.002577772942510473, "deviation-center-line": 3.2127630203326025, "driven_lanedir_consec": 14.111028585628368, "sim_compute_sim_state": 0.01354825208824342, "sim_compute_performance-ego0": 0.002237082718810273}}
set_robot_commands max/mean/median/min: 0.002577772942510473
sim_compute_performance-ego0 max/mean/median/min: 0.002237082718810273
sim_compute_sim_state max/mean/median/min: 0.01354825208824342
sim_render-ego0 max/mean/median/min: 0.004199034566188435
simulation-passed: 1
step_physics max/mean/median/min: 0.1717076470313918
survival_time max/mean/min: 59.99999999999873
No reset possible
Job 65949 | submission 14033 | YU CHEN | label: CBC V2, mar28_apr6 bc, mar31_apr6 anomaly | challenge: aido-LF-sim-validation | step: sim-1of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-05 | duration: 0:11:35
driven_lanedir_consec_median: 14.866736588852865
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.2676626906243387
in-drivable-lane_median: 16.649999999999594


other stats
agent_compute-ego0 max/mean/median/min: 0.08955606512979702
complete-iteration max/mean/median/min: 0.318546664605629
deviation-center-line max/mean/min: 3.2676626906243387
deviation-heading max/mean/median/min: 13.532396906661964
driven_any max/mean/median/min: 21.5516838411515
driven_lanedir_consec max/mean/min: 14.866736588852865
driven_lanedir max/mean/median/min: 14.866736588852865
get_duckie_state max/mean/median/min: 1.4537379306917088e-06
get_robot_state max/mean/median/min: 0.00409915484953284
get_state_dump max/mean/median/min: 0.004998676187291332
get_ui_image max/mean/median/min: 0.03550669255602072
in-drivable-lane max/mean/min: 16.649999999999594
per-episodes details: {"LF-norm-techtrack-000-ego0": {"driven_any": 21.5516838411515, "get_ui_image": 0.03550669255602072, "step_physics": 0.16129540344956117, "survival_time": 59.99999999999873, "driven_lanedir": 14.866736588852865, "get_state_dump": 0.004998676187291332, "get_robot_state": 0.00409915484953284, "sim_render-ego0": 0.004175517680146712, "get_duckie_state": 1.4537379306917088e-06, "in-drivable-lane": 16.649999999999594, "deviation-heading": 13.532396906661964, "agent_compute-ego0": 0.08955606512979702, "complete-iteration": 0.318546664605629, "set_robot_commands": 0.002625451893929538, "deviation-center-line": 3.2676626906243387, "driven_lanedir_consec": 14.866736588852865, "sim_compute_sim_state": 0.013980012452175576, "sim_compute_performance-ego0": 0.0022099470715042356}}
set_robot_commands max/mean/median/min: 0.002625451893929538
sim_compute_performance-ego0 max/mean/median/min: 0.0022099470715042356
sim_compute_sim_state max/mean/median/min: 0.013980012452175576
sim_render-ego0 max/mean/median/min: 0.004175517680146712
simulation-passed: 1
step_physics max/mean/median/min: 0.16129540344956117
survival_time max/mean/min: 59.99999999999873
No reset possible
Job 65880 | submission 13579 | Andras Beres | label: 202-1 | challenge: aido-LF-sim-testing | step: sim-2of4 | status: success | up to date: no | evaluator: gpu-production-spot-3-05 | duration: 0:11:49
driven_lanedir_consec_median: 29.586094579873215
survival_time_median: 59.99999999999873
deviation-center-line_median: 4.091135310484281
in-drivable-lane_median: 0.0


other stats
agent_compute-ego0 max/mean/median/min: 0.020837608126180556
complete-iteration max/mean/median/min: 0.19098477796352872
deviation-center-line max/mean/min: 4.091135310484281
deviation-heading max/mean/median/min: 9.77629830751269
driven_any max/mean/median/min: 30.1774953935896
driven_lanedir_consec max/mean/min: 29.586094579873215
driven_lanedir max/mean/median/min: 29.586094579873215
get_duckie_state max/mean/median/min: 1.9013931312529273e-06
get_robot_state max/mean/median/min: 0.003975508115770021
get_state_dump max/mean/median/min: 0.004999882176357146
get_ui_image max/mean/median/min: 0.028053753977512735
in-drivable-lane max/mean/min: 0.0
per-episodes details: {"LF-norm-small_loop-000-ego0": {"driven_any": 30.1774953935896, "get_ui_image": 0.028053753977512735, "step_physics": 0.11757553050559724, "survival_time": 59.99999999999873, "driven_lanedir": 29.586094579873215, "get_state_dump": 0.004999882176357146, "get_robot_state": 0.003975508115770021, "sim_render-ego0": 0.004110969770560157, "get_duckie_state": 1.9013931312529273e-06, "in-drivable-lane": 0.0, "deviation-heading": 9.77629830751269, "agent_compute-ego0": 0.020837608126180556, "complete-iteration": 0.19098477796352872, "set_robot_commands": 0.002515884561403705, "deviation-center-line": 4.091135310484281, "driven_lanedir_consec": 29.586094579873215, "sim_compute_sim_state": 0.006671325253209504, "sim_compute_performance-ego0": 0.002158200115486545}}
set_robot_commands max/mean/median/min: 0.002515884561403705
sim_compute_performance-ego0 max/mean/median/min: 0.002158200115486545
sim_compute_sim_state max/mean/median/min: 0.006671325253209504
sim_render-ego0 max/mean/median/min: 0.004110969770560157
simulation-passed: 1
step_physics max/mean/median/min: 0.11757553050559724
survival_time max/mean/min: 59.99999999999873
No reset possible