
Submission 11728

Submission: 11728
Competing: yes
Challenge: aido5-LF-sim-validation
User: Yishu Malhotra 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 53907
Next:
User label: sim-exercise-2
Admin priority: 50
Blessing: n/a
User priority: 50

53907

Episodes:

LF-norm-loop-000

LF-norm-small_loop-000

LF-norm-techtrack-000

LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | date started | date completed | duration | message
53907 | LFv-sim | success | yes | | | 0:36:51 |
driven_lanedir_consec_median: 3.4253230643298624
survival_time_median: 59.99999999999873
deviation-center-line_median: 1.606181708822092
in-drivable-lane_median: 32.24999999999908


other stats
agent_compute-ego0_max: 0.013257928930849556
agent_compute-ego0_mean: 0.013033529846117898
agent_compute-ego0_median: 0.013043927799989426
agent_compute-ego0_min: 0.012788334853643184
complete-iteration_max: 0.23135550770533272
complete-iteration_mean: 0.2056561500342251
complete-iteration_median: 0.2077093400996864
complete-iteration_min: 0.1758504122321949
deviation-center-line_max: 3.627426986229792
deviation-center-line_mean: 1.8593475496218468
deviation-center-line_min: 0.5975997946134114
deviation-heading_max: 14.41396246805705
deviation-heading_mean: 8.125207821365578
deviation-heading_median: 6.713644344575366
deviation-heading_min: 4.659580128254529
driven_any_max: 10.708362043937294
driven_any_mean: 9.754408859252308
driven_any_median: 9.646479760261752
driven_any_min: 9.01631387254843
driven_lanedir_consec_max: 9.699065722084054
driven_lanedir_consec_mean: 4.452843725573148
driven_lanedir_consec_min: 1.2616630515488148
driven_lanedir_max: 9.699065722084054
driven_lanedir_mean: 4.887990030564388
driven_lanedir_median: 4.295615674312341
driven_lanedir_min: 1.2616630515488148
get_duckie_state_max: 1.5355156533938573e-06
get_duckie_state_mean: 1.4777060331233263e-06
get_duckie_state_median: 1.4865924476286851e-06
get_duckie_state_min: 1.4021235838420782e-06
get_robot_state_max: 0.00417046892404329
get_robot_state_mean: 0.004047043645236939
get_robot_state_median: 0.004049280898755635
get_robot_state_min: 0.003919143859393194
get_state_dump_max: 0.005244715029155087
get_state_dump_mean: 0.005172560440014754
get_state_dump_median: 0.005184467755387792
get_state_dump_min: 0.005076591220128348
get_ui_image_max: 0.036926978632969024
get_ui_image_mean: 0.03224758855944446
get_ui_image_median: 0.03246677559558151
get_ui_image_min: 0.027129824413645778
in-drivable-lane_max: 50.49999999999877
in-drivable-lane_mean: 29.8499999999992
in-drivable-lane_min: 4.399999999999846
per-episodes details:
{
"LF-norm-loop-000-ego0": {"driven_any": 9.341055265480351, "get_ui_image": 0.030262815713655165, "step_physics": 0.12328745730157348, "survival_time": 52.39999999999916, "driven_lanedir": 6.186432790823307, "get_state_dump": 0.005220426844914148, "get_robot_state": 0.00417046892404329, "sim_render-ego0": 0.0042286569215321795, "get_duckie_state": 1.5355156533938573e-06, "in-drivable-lane": 21.599999999999547, "deviation-heading": 7.167097243079365, "agent_compute-ego0": 0.01308944023030502, "complete-iteration": 0.19664579578759672, "set_robot_commands": 0.002486664187464973, "deviation-center-line": 1.988372353259797, "driven_lanedir_consec": 4.44584757085835, "sim_compute_sim_state": 0.011427769101836549, "sim_compute_performance-ego0": 0.0023697729447094795},
"LF-norm-zigzag-000-ego0": {"driven_any": 10.708362043937294, "get_ui_image": 0.036926978632969024, "step_physics": 0.14973358349637325, "survival_time": 59.99999999999873, "driven_lanedir": 9.699065722084054, "get_state_dump": 0.005076591220128348, "get_robot_state": 0.003960652911196541, "sim_render-ego0": 0.004140528512934066, "get_duckie_state": 1.4021235838420782e-06, "in-drivable-lane": 4.399999999999846, "deviation-heading": 14.41396246805705, "agent_compute-ego0": 0.012788334853643184, "complete-iteration": 0.23135550770533272, "set_robot_commands": 0.0023729926243511268, "deviation-center-line": 3.627426986229792, "driven_lanedir_consec": 9.699065722084054, "sim_compute_sim_state": 0.014017285554236316, "sim_compute_performance-ego0": 0.0022367406745040347},
"LF-norm-techtrack-000-ego0": {"driven_any": 9.01631387254843, "get_ui_image": 0.03467073547750786, "step_physics": 0.14102637122612413, "survival_time": 59.99999999999873, "driven_lanedir": 1.2616630515488148, "get_state_dump": 0.005244715029155087, "get_robot_state": 0.004137908886314729, "sim_render-ego0": 0.004176725455862993, "get_duckie_state": 1.5184543809723994e-06, "in-drivable-lane": 50.49999999999877, "deviation-heading": 4.659580128254529, "agent_compute-ego0": 0.013257928930849556, "complete-iteration": 0.2187728844117761, "set_robot_commands": 0.0024526728678503995, "deviation-center-line": 0.5975997946134114, "driven_lanedir_consec": 1.2616630515488148, "sim_compute_sim_state": 0.01140979148267608, "sim_compute_performance-ego0": 0.00229026058333601},
"LF-norm-small_loop-000-ego0": {"driven_any": 9.95190425504315, "get_ui_image": 0.027129824413645778, "step_physics": 0.11146707121875264, "survival_time": 59.99999999999873, "driven_lanedir": 2.4047985578013753, "get_state_dump": 0.005148508665861436, "get_robot_state": 0.003919143859393194, "sim_render-ego0": 0.003998797700168886, "get_duckie_state": 1.4547305142849709e-06, "in-drivable-lane": 42.89999999999862, "deviation-heading": 6.260191446071367, "agent_compute-ego0": 0.01299841536967383, "complete-iteration": 0.1758504122321949, "set_robot_commands": 0.002340286994953139, "deviation-center-line": 1.223991064384387, "driven_lanedir_consec": 2.4047985578013753, "sim_compute_sim_state": 0.006616599156794997, "sim_compute_performance-ego0": 0.0021320505007220544}
}
set_robot_commands_max: 0.002486664187464973
set_robot_commands_mean: 0.0024131541686549097
set_robot_commands_median: 0.002412832746100763
set_robot_commands_min: 0.002340286994953139
sim_compute_performance-ego0_max: 0.0023697729447094795
sim_compute_performance-ego0_mean: 0.0022572061758178944
sim_compute_performance-ego0_median: 0.0022635006289200223
sim_compute_performance-ego0_min: 0.0021320505007220544
sim_compute_sim_state_max: 0.014017285554236316
sim_compute_sim_state_mean: 0.010867861323885983
sim_compute_sim_state_median: 0.011418780292256313
sim_compute_sim_state_min: 0.006616599156794997
sim_render-ego0_max: 0.0042286569215321795
sim_render-ego0_mean: 0.004136177147624531
sim_render-ego0_median: 0.00415862698439853
sim_render-ego0_min: 0.003998797700168886
simulation-passed: 1
step_physics_max: 0.14973358349637325
step_physics_mean: 0.1313786208107059
step_physics_median: 0.13215691426384882
step_physics_min: 0.11146707121875264
survival_time_max: 59.99999999999873
survival_time_mean: 58.09999999999883
survival_time_min: 52.39999999999916
48793 | LFv-sim | error | no | | | 0:09:12 | InvalidEvaluator: Tr [...]
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140537166558784
- M:video_aido:cmdline(in:/;out:/) 140537166634624
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
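The root cause of this failed job was that the evaluator's video-rendering step could not read 'banner1.png' as an image. A minimal sketch (an assumption for illustration, not part of the evaluator code) for checking that such a file is a PIL-readable image before rendering is attempted; check_image is a hypothetical helper and the path comes from the traceback above:

from PIL import Image, UnidentifiedImageError

def check_image(path: str) -> None:
    # Try to open and verify the file; verify() raises if the data is not a
    # recognizable, intact image (the failure mode seen in the traceback).
    try:
        with Image.open(path) as im:
            im.verify()
            print(f"{path}: OK ({im.format})")
    except (UnidentifiedImageError, OSError) as e:
        print(f"{path}: not a usable image: {e}")

check_image("banner1.png")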
43745 | LFv-sim | success | no | | | 0:05:52 |
43744 | LFv-sim | success | no | | | 0:08:43 |
43743 | LFv-sim | success | no | | | 0:08:44 |