
Submission 11485

Submission: 11485
Competing: yes
Challenge: aido5-LF-sim-validation
User: Raphael Jean
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 54502
Next:
User label: real-exercise-2
Admin priority: 50
Blessing: n/a
User priority: 50

54502

Episodes evaluated (detailed per-episode statistics are available on the challenge server):

LF-norm-loop-000
LF-norm-small_loop-000
LF-norm-techtrack-000
LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | date started | date completed | duration | message
54502 | LFv-sim | success | yes |  |  | 0:37:16 |
driven_lanedir_consec_median: 7.140142126770781
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.894042142494184
in-drivable-lane_median: 3.899999999999902


other stats
agent_compute-ego0_max: 0.012652621678170514
agent_compute-ego0_mean: 0.0124908994933573
agent_compute-ego0_median: 0.012471914303297752
agent_compute-ego0_min: 0.012367147688663175
complete-iteration_max: 0.3072422996605977
complete-iteration_mean: 0.25064042760615185
complete-iteration_median: 0.2491388627631182
complete-iteration_min: 0.19704168523777335
deviation-center-line_max: 4.823563709255509
deviation-center-line_mean: 3.8740629724087974
deviation-center-line_min: 2.8846038953913107
deviation-heading_max: 23.623667018513903
deviation-heading_mean: 20.881984091519048
deviation-heading_median: 20.838847091667965
deviation-heading_min: 18.22657516422635
driven_any_max: 11.966252880527197
driven_any_mean: 10.652787158302552
driven_any_median: 11.684935865845109
driven_any_min: 7.275024020992793
driven_lanedir_consec_max: 9.620531095048392
driven_lanedir_consec_mean: 7.514153291664028
driven_lanedir_consec_min: 6.155797818066159
driven_lanedir_max: 10.03209579894228
driven_lanedir_mean: 8.818426956863881
driven_lanedir_median: 9.54290710522354
driven_lanedir_min: 6.155797818066159
get_duckie_state_max: 1.3706117573350962e-06
get_duckie_state_mean: 1.3179655780731383e-06
get_duckie_state_median: 1.3228161547404345e-06
get_duckie_state_min: 1.2556182454765885e-06
get_robot_state_max: 0.0038808303894382894
get_robot_state_mean: 0.0037757543547423866
get_robot_state_median: 0.0037729399686649776
get_robot_state_min: 0.0036763070922013023
get_state_dump_max: 0.005253639546758825
get_state_dump_mean: 0.004813895322689991
get_state_dump_median: 0.004722366892608753
get_state_dump_min: 0.004557207958783635
get_ui_image_max: 0.038060553003065656
get_ui_image_mean: 0.03158086201696309
get_ui_image_median: 0.030888824736843696
get_ui_image_min: 0.026485245591099316
in-drivable-lane_max: 5.699999999999909
in-drivable-lane_mean: 3.5624999999999307
in-drivable-lane_min: 0.7500000000000107
per-episodes details:
{
  "LF-norm-loop-000-ego0": {"driven_any": 11.442121395708066, "get_ui_image": 0.02877612733324799, "step_physics": 0.16849450167767907, "survival_time": 59.99999999999873, "driven_lanedir": 9.620531095048392, "get_state_dump": 0.004623080371917039, "get_robot_state": 0.003728058018553366, "sim_render-ego0": 0.0037694568935778615, "get_duckie_state": 1.3284738812220286e-06, "in-drivable-lane": 2.8999999999999417, "deviation-heading": 23.623667018513903, "agent_compute-ego0": 0.012367147688663175, "complete-iteration": 0.2354526289495997, "set_robot_commands": 0.0022491312940154445, "deviation-center-line": 3.86633263040639, "driven_lanedir_consec": 9.620531095048392, "sim_compute_sim_state": 0.009306246394618762, "sim_compute_performance-ego0": 0.002051770736732451},
  "LF-norm-zigzag-000-ego0": {"driven_any": 7.275024020992793, "get_ui_image": 0.038060553003065656, "step_physics": 0.227101465853134, "survival_time": 40.349999999999845, "driven_lanedir": 6.155797818066159, "get_state_dump": 0.004821653413300467, "get_robot_state": 0.0038808303894382894, "sim_render-ego0": 0.003927294865693196, "get_duckie_state": 1.3706117573350962e-06, "in-drivable-lane": 0.7500000000000107, "deviation-heading": 18.22657516422635, "agent_compute-ego0": 0.012517655544941967, "complete-iteration": 0.3072422996605977, "set_robot_commands": 0.002317489078729459, "deviation-center-line": 2.8846038953913107, "driven_lanedir_consec": 6.155797818066159, "sim_compute_sim_state": 0.01237671416584808, "sim_compute_performance-ego0": 0.00214942463553778},
  "LF-norm-techtrack-000-ego0": {"driven_any": 11.966252880527197, "get_ui_image": 0.0330015221404394, "step_physics": 0.19020976432654185, "survival_time": 59.99999999999873, "driven_lanedir": 9.465283115398693, "get_state_dump": 0.004557207958783635, "get_robot_state": 0.0036763070922013023, "sim_render-ego0": 0.003745247978254917, "get_duckie_state": 1.2556182454765885e-06, "in-drivable-lane": 5.699999999999909, "deviation-heading": 22.647044738509948, "agent_compute-ego0": 0.012426173061653535, "complete-iteration": 0.26282509657663666, "set_robot_commands": 0.0022104618253557013, "deviation-center-line": 3.921751654581978, "driven_lanedir_consec": 7.078545240549768, "sim_compute_sim_state": 0.01092416023235337, "sim_compute_performance-ego0": 0.0019898152569747785},
  "LF-norm-small_loop-000-ego0": {"driven_any": 11.927750335982152, "get_ui_image": 0.026485245591099316, "step_physics": 0.13399662165518705, "survival_time": 59.99999999999873, "driven_lanedir": 10.03209579894228, "get_state_dump": 0.005253639546758825, "get_robot_state": 0.003817821918776589, "sim_render-ego0": 0.003866217118516552, "get_duckie_state": 1.3171584282588404e-06, "in-drivable-lane": 4.899999999999863, "deviation-heading": 19.030649444825983, "agent_compute-ego0": 0.012652621678170514, "complete-iteration": 0.19704168523777335, "set_robot_commands": 0.0022886700673861666, "deviation-center-line": 4.823563709255509, "driven_lanedir_consec": 7.201739012991794, "sim_compute_sim_state": 0.00647852561753755, "sim_compute_performance-ego0": 0.002117149438786566}
}
set_robot_commands_max: 0.002317489078729459
set_robot_commands_mean: 0.002266438066371693
set_robot_commands_median: 0.0022689006807008055
set_robot_commands_min: 0.0022104618253557013
sim_compute_performance-ego0_max: 0.00214942463553778
sim_compute_performance-ego0_mean: 0.002077040017007894
sim_compute_performance-ego0_median: 0.0020844600877595086
sim_compute_performance-ego0_min: 0.0019898152569747785
sim_compute_sim_state_max: 0.01237671416584808
sim_compute_sim_state_mean: 0.00977141160258944
sim_compute_sim_state_median: 0.010115203313486068
sim_compute_sim_state_min: 0.00647852561753755
sim_render-ego0_max: 0.003927294865693196
sim_render-ego0_mean: 0.003827054214010632
sim_render-ego0_median: 0.0038178370060472063
sim_render-ego0_min: 0.003745247978254917
simulation-passed: 1
step_physics_max: 0.227101465853134
step_physics_mean: 0.1799505883781355
step_physics_median: 0.17935213300211045
step_physics_min: 0.13399662165518705
survival_time_max: 59.99999999999873
survival_time_mean: 55.08749999999901
survival_time_min: 40.349999999999845
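
The summary rows above can be recomputed from the per-episodes details JSON. The following sketch is not the official scoring code; it assumes that JSON object has been saved locally as per_episodes.json (a hypothetical filename) and uses Python's statistics module. For the four episodes it reproduces, for example, the driven_lanedir_consec_median of 7.140142126770781 (with four values, the median is the mean of the two middle ones).

import json
import statistics

# Hypothetical filename: the "per-episodes details" JSON above, saved locally.
with open("per_episodes.json") as f:
    episodes = json.load(f)          # keys like "LF-norm-loop-000-ego0"

metric = "driven_lanedir_consec"
values = [ep[metric] for ep in episodes.values()]

print(f"{metric}_min: {min(values)}")
print(f"{metric}_max: {max(values)}")
print(f"{metric}_mean: {statistics.mean(values)}")
print(f"{metric}_median: {statistics.median(values)}")   # 7.140142126770781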
54497 | LFv-sim | success | yes |  |  | 0:40:59 |
54496 | LFv-sim | success | yes |  |  | 0:24:39 |
49653 | LFv-sim | error | no |  |  | 0:05:38 | InvalidEvaluator: Tr [...]
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140190427456560
- M:video_aido:cmdline(in:/;out:/) 140190427458768
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
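
Job 49653 failed while the experiment manager was assembling the episode video: Pillow raised UnidentifiedImageError because it could not recognise 'banner1.png' as an image, which typically means the file is empty, truncated, or not actually a PNG. Below is a minimal sketch of a pre-flight check (a hypothetical helper, not part of the Duckietown code) that reproduces and reports this class of error.

from PIL import Image, UnidentifiedImageError

def check_image(path: str) -> bool:
    """Return True if Pillow can identify and parse the file as an image."""
    try:
        with Image.open(path) as im:   # raises UnidentifiedImageError for unrecognised files
            im.verify()                # integrity check without decoding all pixels
        return True
    except (UnidentifiedImageError, OSError) as exc:
        print(f"{path}: not a readable image ({exc})")
        return False

# A zero-byte or corrupted banner1.png would trigger the same error seen in the traceback above.
check_image("banner1.png")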
42553 | LFv-sim | success | no |  |  | 0:11:13 |