
Submission 10713

Submission: 10713
Competing: yes
Challenge: aido5-LF-sim-validation
User: Raphael Jean
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 57977
Next:
User label: sim-exercise-1
Admin priority: 50
Blessing: n/a
User priority: 50

Job 57977

Episodes (detailed statistics below):

LF-norm-loop-000
LF-norm-small_loop-000
LF-norm-techtrack-000
LF-norm-zigzag-000

Evaluation jobs for this submission

Job 57977 | step: LFv-sim | status: success | up to date: yes | duration: 0:27:41
driven_lanedir_consec_median: 6.37193555633125
survival_time_median: 52.72499999999914
deviation-center-line_median: 2.1173915993573
in-drivable-lane_median: 4.925000000000049


Other stats:

agent_compute-ego0_max: 0.01218677103072777
agent_compute-ego0_mean: 0.012046555248147942
agent_compute-ego0_median: 0.012018484551210963
agent_compute-ego0_min: 0.011962480859442072
complete-iteration_max: 0.19146170763052275
complete-iteration_mean: 0.17117259652190178
complete-iteration_median: 0.17235044362259555
complete-iteration_min: 0.14852779121189327
deviation-center-line_max: 3.957915484310653
deviation-center-line_mean: 2.292644375404266
deviation-center-line_min: 0.9778788185918108
deviation-heading_max: 14.237983876061657
deviation-heading_mean: 9.384145658243924
deviation-heading_median: 9.581999285327472
deviation-heading_min: 4.134600186259103
driven_any_max: 10.219778460456116
driven_any_mean: 7.53144830241248
driven_any_median: 7.494314542622262
driven_any_min: 4.91738566394928
driven_lanedir_consec_max: 9.872477800619889
driven_lanedir_consec_mean: 6.458222569481856
driven_lanedir_consec_min: 3.216541364645036
driven_lanedir_max: 9.872477800619889
driven_lanedir_mean: 6.5412959215335515
driven_lanedir_median: 6.37193555633125
driven_lanedir_min: 3.5488347728518175
get_duckie_state_max: 1.1770056248902282e-06
get_duckie_state_mean: 1.1376587028240797e-06
get_duckie_state_median: 1.1658106999347933e-06
get_duckie_state_min: 1.042007786536503e-06
get_robot_state_max: 0.0035405052669256875
get_robot_state_mean: 0.0035105549744971916
get_robot_state_median: 0.003506208141082332
get_robot_state_min: 0.003489298348898416
get_state_dump_max: 0.004341624757828661
get_state_dump_mean: 0.004329373010754109
get_state_dump_median: 0.004330189066452163
get_state_dump_min: 0.004315489152283448
get_ui_image_max: 0.03524576912910912
get_ui_image_mean: 0.029968286149199615
get_ui_image_median: 0.02971772648274302
get_ui_image_min: 0.025191922502203305
in-drivable-lane_max: 20.59999999999956
in-drivable-lane_mean: 7.612499999999915
in-drivable-lane_min: 0.0
per-episodes:
details{"LF-norm-loop-000-ego0": {"driven_any": 4.91738566394928, "get_ui_image": 0.027649355589014383, "step_physics": 0.09894354797350197, "survival_time": 29.10000000000028, "driven_lanedir": 3.8910485123772705, "get_state_dump": 0.004315489152283448, "get_robot_state": 0.0035405052669256875, "sim_render-ego0": 0.0035916053614935377, "get_duckie_state": 1.042007786536503e-06, "in-drivable-lane": 9.250000000000131, "deviation-heading": 4.134600186259103, "agent_compute-ego0": 0.012047536990654816, "complete-iteration": 0.16473843384770545, "set_robot_commands": 0.0020850845716420823, "deviation-center-line": 0.9778788185918108, "driven_lanedir_consec": 3.8910485123772705, "sim_compute_sim_state": 0.010611570106362395, "sim_compute_performance-ego0": 0.00187647158653822}, "LF-norm-zigzag-000-ego0": {"driven_any": 9.236272915366785, "get_ui_image": 0.03524576912910912, "step_physics": 0.11571147956022317, "survival_time": 59.99999999999873, "driven_lanedir": 8.85282260028523, "get_state_dump": 0.004341624757828661, "get_robot_state": 0.0035150327055182284, "sim_render-ego0": 0.0036091367767613495, "get_duckie_state": 1.1770056248902282e-06, "in-drivable-lane": 0.0, "deviation-heading": 14.237983876061657, "agent_compute-ego0": 0.01198943211176711, "complete-iteration": 0.19146170763052275, "set_robot_commands": 0.002154452318355106, "deviation-center-line": 3.957915484310653, "driven_lanedir_consec": 8.85282260028523, "sim_compute_sim_state": 0.012950825552261442, "sim_compute_performance-ego0": 0.001865556694685073}, "LF-norm-techtrack-000-ego0": {"driven_any": 10.219778460456116, "get_ui_image": 0.03178609737647165, "step_physics": 0.10730705074624754, "survival_time": 59.99999999999873, "driven_lanedir": 9.872477800619889, "get_state_dump": 0.004329781051877138, "get_robot_state": 0.0034973835766464348, "sim_render-ego0": 0.00363086999008598, "get_duckie_state": 1.1565584028690284e-06, "in-drivable-lane": 0.5999999999999659, "deviation-heading": 11.788204031481849, "agent_compute-ego0": 0.01218677103072777, "complete-iteration": 0.17996245339748565, "set_robot_commands": 0.002109052934416327, "deviation-center-line": 2.8154760996881687, "driven_lanedir_consec": 9.872477800619889, "sim_compute_sim_state": 0.013163883620555155, "sim_compute_performance-ego0": 0.00187610329239692}, "LF-norm-small_loop-000-ego0": {"driven_any": 5.752356169877739, "get_ui_image": 0.025191922502203305, "step_physics": 0.08960934811896021, "survival_time": 45.449999999999555, "driven_lanedir": 3.5488347728518175, "get_state_dump": 0.004330597081027188, "get_robot_state": 0.003489298348898416, "sim_render-ego0": 0.003580871257153186, "get_duckie_state": 1.175062997000558e-06, "in-drivable-lane": 20.59999999999956, "deviation-heading": 7.375794539173095, "agent_compute-ego0": 0.011962480859442072, "complete-iteration": 0.14852779121189327, "set_robot_commands": 0.0020326493860601067, "deviation-center-line": 1.419307099026431, "driven_lanedir_consec": 3.216541364645036, "sim_compute_sim_state": 0.006404692786080497, "sim_compute_performance-ego0": 0.0018491040219317423}}
set_robot_commands_max: 0.002154452318355106
set_robot_commands_mean: 0.0020953098026184054
set_robot_commands_median: 0.0020970687530292046
set_robot_commands_min: 0.0020326493860601067
sim_compute_performance-ego0_max: 0.00187647158653822
sim_compute_performance-ego0_mean: 0.001866808898887989
sim_compute_performance-ego0_median: 0.0018708299935409965
sim_compute_performance-ego0_min: 0.0018491040219317423
sim_compute_sim_state_max: 0.013163883620555155
sim_compute_sim_state_mean: 0.010782743016314874
sim_compute_sim_state_median: 0.01178119782931192
sim_compute_sim_state_min: 0.006404692786080497
sim_render-ego0_max: 0.00363086999008598
sim_render-ego0_mean: 0.0036031208463735134
sim_render-ego0_median: 0.0036003710691274436
sim_render-ego0_min: 0.003580871257153186
simulation-passed: 1
step_physics_max: 0.11571147956022317
step_physics_mean: 0.1028928565997332
step_physics_median: 0.10312529935987474
step_physics_min: 0.08960934811896021
survival_time_max: 59.99999999999873
survival_time_mean: 48.63749999999932
survival_time_min: 29.10000000000028
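The aggregate figures above are simple per-metric summaries over the four episodes. A minimal sketch of how they can be reproduced, assuming the per-episodes JSON above has been saved to a file named per_episodes.json (an illustrative name, not something the evaluator produces):

    import json
    import statistics

    # Load the per-episodes dictionary shown above (illustrative file name).
    with open("per_episodes.json") as f:
        per_episodes = json.load(f)

    # Group each metric's values across the four episodes.
    values_by_metric = {}
    for episode, metrics in per_episodes.items():
        for name, value in metrics.items():
            values_by_metric.setdefault(name, []).append(value)

    # Recompute the max/mean/median/min aggregates listed above.
    for name, values in sorted(values_by_metric.items()):
        print(f"{name}_max: {max(values)}")
        print(f"{name}_mean: {statistics.mean(values)}")
        print(f"{name}_median: {statistics.median(values)}")
        print(f"{name}_min: {min(values)}")

With four episodes the median is the mean of the two middle values; for instance, the median of the per-episode driven_lanedir_consec values (3.8910485123772705 and 8.85282260028523 in the middle) is 6.37193555633125, which matches the reported driven_lanedir_consec_median.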
Job 57974 | step: LFv-sim | status: success | up to date: yes | duration: 0:24:40
Job 57972 | step: LFv-sim | status: success | up to date: yes | duration: 0:33:44
Job 51848 | step: LFv-sim | status: error | up to date: no | duration: 0:09:29 | message: InvalidEvaluator (full traceback below)
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140711243932720
- M:video_aido:cmdline(in:/;out:/) 140711243933776
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
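The failing step in this traceback is Pillow's Image.open raising UnidentifiedImageError for 'banner1.png' while procgraph's static_image block was assembling the episode video in make_video2. A minimal sketch of a pre-flight check one could run on the banner asset before launching the video step; the helper function is illustrative and not part of duckietown_experiment_manager or aido_analyze:

    from PIL import Image, UnidentifiedImageError

    def is_usable_image(path: str) -> bool:
        """Return True if Pillow can open and verify the file as an image."""
        try:
            with Image.open(path) as im:
                im.verify()  # integrity check without decoding the full image
            return True
        except (FileNotFoundError, UnidentifiedImageError) as e:
            print(f"{path}: not a usable image ({e})")
            return False

    # The asset named in the traceback above.
    is_usable_image("banner1.png")

A zero-byte, truncated, or non-image file named banner1.png would typically trigger the same UnidentifiedImageError seen in job 51848.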
Job 41057 | step: LFv-sim | status: success | up to date: no | duration: 0:06:51
Job 41056 | step: LFv-sim | status: success | up to date: no | duration: 0:07:59
Job 36643 | step: LFv-sim | status: success | up to date: no | duration: 0:10:12