
Submission 10850

Submission: 10850
Competing: yes
Challenge: aido5-LF-sim-validation
User: Ayman Shams 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 57743
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 57743

Episodes (per-episode statistics images not included here):

LF-norm-loop-000
LF-norm-small_loop-000
LF-norm-techtrack-000
LF-norm-zigzag-000

Evaluation jobs for this submission

Job 57743 · step: LFv-sim · status: success · up to date: yes · duration: 0:08:14
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 0.8620068368358367
survival_time_median: 11.925000000000034
deviation-center-line_median: 0.2985942839385983
in-drivable-lane_median: 6.875000000000037


other stats
agent_compute-ego0_max: 0.01290924975901474
agent_compute-ego0_mean: 0.012595021201340297
agent_compute-ego0_median: 0.012807028903138978
agent_compute-ego0_min: 0.011856777240068485
complete-iteration_max: 0.1664436349162349
complete-iteration_mean: 0.1598495531094699
complete-iteration_median: 0.16557630568399315
complete-iteration_min: 0.14180196615365837
deviation-center-line_max: 0.420112173787609
deviation-center-line_mean: 0.30846525528511554
deviation-center-line_min: 0.2165602794756565
deviation-heading_max: 3.1924743994485394
deviation-heading_mean: 1.8525164082090375
deviation-heading_median: 1.504711131059237
deviation-heading_min: 1.2081689712691377
driven_any_max: 4.223608798731008
driven_any_mean: 2.958753305310407
driven_any_median: 3.0426036219635524
driven_any_min: 1.526197178583515
driven_lanedir_consec_max: 1.2205547124924916
driven_lanedir_consec_mean: 0.8660194101310439
driven_lanedir_consec_min: 0.5195092543600106
driven_lanedir_max: 1.2205547124924916
driven_lanedir_mean: 0.8660194101310439
driven_lanedir_median: 0.8620068368358367
driven_lanedir_min: 0.5195092543600106
get_duckie_state_max: 1.1660683322960222e-06
get_duckie_state_mean: 1.1210446449668575e-06
get_duckie_state_median: 1.1178302320289167e-06
get_duckie_state_min: 1.0824497835135754e-06
get_robot_state_max: 0.0035434780698834043
get_robot_state_mean: 0.003506043730888592
get_robot_state_median: 0.003514910113229024
get_robot_state_min: 0.0034508766272129156
get_state_dump_max: 0.004286989359788491
get_state_dump_mean: 0.0042526826225999965
get_state_dump_median: 0.004261945132856016
get_state_dump_min: 0.004199850864899464
get_ui_image_max: 0.03537060081222911
get_ui_image_mean: 0.03039324015704265
get_ui_image_median: 0.030192581815084635
get_ui_image_min: 0.025817196185772235
in-drivable-lane_max: 13.550000000000098
in-drivable-lane_mean: 7.625000000000041
in-drivable-lane_min: 3.1999999999999913
per-episodes details (one JSON object per episode):
{
"LF-norm-loop-000-ego0": {"driven_any": 1.526197178583515, "get_ui_image": 0.028184739026156334, "step_physics": 0.10073999925093216, "survival_time": 6.549999999999985, "driven_lanedir": 0.7662314436335209, "get_state_dump": 0.004279792308807373, "get_robot_state": 0.0035434780698834043, "sim_render-ego0": 0.0036475062370300297, "get_duckie_state": 1.1108138344504617e-06, "in-drivable-lane": 3.1999999999999913, "deviation-heading": 1.2081689712691377, "agent_compute-ego0": 0.01285184874679103, "complete-iteration": 0.16567538001320578, "set_robot_commands": 0.0021209987727078524, "deviation-center-line": 0.35988982786013374, "driven_lanedir_consec": 0.7662314436335209, "sim_compute_sim_state": 0.00836369666186246, "sim_compute_performance-ego0": 0.0018708218227733264},
"LF-norm-zigzag-000-ego0": {"driven_any": 4.223608798731008, "get_ui_image": 0.03537060081222911, "step_physics": 0.0922155291945846, "survival_time": 16.150000000000095, "driven_lanedir": 0.5195092543600106, "get_state_dump": 0.0042440979569046584, "get_robot_state": 0.003525163656399574, "sim_render-ego0": 0.003680677325637252, "get_duckie_state": 1.0824497835135754e-06, "in-drivable-lane": 13.550000000000098, "deviation-heading": 1.5102257155019092, "agent_compute-ego0": 0.01290924975901474, "complete-iteration": 0.1664436349162349, "set_robot_commands": 0.0021812702402656463, "deviation-center-line": 0.2165602794756565, "driven_lanedir_consec": 0.5195092543600106, "sim_compute_sim_state": 0.010380691216315753, "sim_compute_performance-ego0": 0.0018647857654241868},
"LF-norm-techtrack-000-ego0": {"driven_any": 3.664550860808927, "get_ui_image": 0.032200424604012935, "step_physics": 0.09375303060236112, "survival_time": 14.150000000000066, "driven_lanedir": 0.9577822300381524, "get_state_dump": 0.004286989359788491, "get_robot_state": 0.0035046565700584736, "sim_render-ego0": 0.00366706327653267, "get_duckie_state": 1.1660683322960222e-06, "in-drivable-lane": 9.100000000000062, "deviation-heading": 3.1924743994485394, "agent_compute-ego0": 0.012762209059486926, "complete-iteration": 0.1654772313547806, "set_robot_commands": 0.00207336855606294, "deviation-center-line": 0.420112173787609, "driven_lanedir_consec": 0.9577822300381524, "sim_compute_sim_state": 0.011302539160553838, "sim_compute_performance-ego0": 0.0018537884027185576},
"LF-norm-small_loop-000-ego0": {"driven_any": 2.420656383118177, "get_ui_image": 0.025817196185772235, "step_physics": 0.08349543106861604, "survival_time": 9.700000000000005, "driven_lanedir": 1.2205547124924916, "get_state_dump": 0.004199850864899464, "get_robot_state": 0.0034508766272129156, "sim_render-ego0": 0.003526916259374374, "get_duckie_state": 1.1248466296073718e-06, "in-drivable-lane": 4.650000000000013, "deviation-heading": 1.4991965466165649, "agent_compute-ego0": 0.011856777240068485, "complete-iteration": 0.14180196615365837, "set_robot_commands": 0.002033835190993089, "deviation-center-line": 0.2372987400170629, "driven_lanedir_consec": 1.2205547124924916, "sim_compute_sim_state": 0.0055206922384408805, "sim_compute_performance-ego0": 0.0018288025489220253}
}
set_robot_commands_max: 0.0021812702402656463
set_robot_commands_mean: 0.002102368190007382
set_robot_commands_median: 0.0020971836643853964
set_robot_commands_min: 0.002033835190993089
sim_compute_performance-ego0_max: 0.0018708218227733264
sim_compute_performance-ego0_mean: 0.001854549634959524
sim_compute_performance-ego0_median: 0.001859287084071372
sim_compute_performance-ego0_min: 0.0018288025489220253
sim_compute_sim_state_max: 0.011302539160553838
sim_compute_sim_state_mean: 0.008891904819293233
sim_compute_sim_state_median: 0.009372193939089106
sim_compute_sim_state_min: 0.0055206922384408805
sim_render-ego0_max: 0.003680677325637252
sim_render-ego0_mean: 0.0036305407746435817
sim_render-ego0_median: 0.0036572847567813496
sim_render-ego0_min: 0.003526916259374374
simulation-passed: 1
step_physics_max: 0.10073999925093216
step_physics_mean: 0.09255099752912348
step_physics_median: 0.09298427989847284
step_physics_min: 0.08349543106861604
survival_time_max: 16.150000000000095
survival_time_mean: 11.637500000000038
survival_time_min: 6.549999999999985
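The aggregate rows above are ordinary minima, maxima, means, and medians over the four episodes. As a sanity check, the survival_time aggregates can be reproduced from the per-episode values in the JSON details with Python's standard statistics module:

```python
# Reproduce the survival_time aggregates from the per-episode values.
from statistics import mean, median

survival_times = {
    "LF-norm-loop-000-ego0": 6.549999999999985,
    "LF-norm-zigzag-000-ego0": 16.150000000000095,
    "LF-norm-techtrack-000-ego0": 14.150000000000066,
    "LF-norm-small_loop-000-ego0": 9.700000000000005,
}

# With an even number of episodes, the median is the mean of the
# two middle values: (9.7 + 14.15) / 2 = 11.925.
m = median(survival_times.values())
a = mean(survival_times.values())

print(round(m, 4))  # 11.925  (matches survival_time_median)
print(round(a, 4))  # 11.6375 (matches survival_time_mean)
```

The same check works for any of the other metrics, e.g. deviation-center-line or driven_lanedir_consec.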
Job 51494 · step: LFv-sim · status: error · up to date: no · duration: 0:03:12
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140108199745376
- M:video_aido:cmdline(in:/;out:/) 140108199746576
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
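The three-level traceback above is Python's exception chaining at work: procgraph's imread wraps the PIL error with `raise ValueError(msg) from e`, which sets `__cause__` and produces the "The above exception was the direct cause of the following exception" lines. A minimal stdlib-only sketch of that mechanism (the class below is a local stand-in, not the real PIL.UnidentifiedImageError):

```python
# Sketch of the exception chaining seen in the traceback above.
# `raise X from e` sets X.__cause__ = e, which the interpreter renders as
# "The above exception was the direct cause of the following exception:".

class UnidentifiedImageError(OSError):
    """Local stand-in for PIL.UnidentifiedImageError."""

def imread(filename):
    try:
        # Simulate PIL failing to recognize the file format.
        raise UnidentifiedImageError(f"cannot identify image file {filename!r}")
    except UnidentifiedImageError as e:
        # Explicit chaining, as procgraph_pil's imread does.
        raise ValueError(f'Could not open filename "{filename}".') from e

try:
    imread("banner1.png")
except ValueError as err:
    # The original error is preserved on the chain.
    assert isinstance(err.__cause__, UnidentifiedImageError)
    print(err)  # Could not open filename "banner1.png".
```

In the failed job, the root cause is simply that 'banner1.png' was not a readable image file in the evaluator container; the chained wrappers only propagate that one failure up through procgraph and the experiment manager.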
Job 40849 · step: LFv-sim · status: success · up to date: no · duration: 0:06:00
Job 40831 · step: LFv-sim · status: timeout · up to date: no · duration: ----
Job 40829 · step: LFv-sim · status: timeout · up to date: no · duration: 0:11:20
Timeout because evaluator contacted us
Job 38139 · step: LFv-sim · status: success · up to date: no · duration: 0:06:22
Job 38135 · step: LFv-sim · status: error · up to date: no · duration: 0:00:43
The container "evaluator" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.49-py3.8.egg/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/media/sdb/duckietown-tmp-evals/production/aido5-LF-sim-validation/submission10850/LFv-sim-mont03-cfb9f976bc49-1-job38135-a-wd/challenge-results/challenge_results.yaml' does not exist.
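The NoResultsFound error follows directly from the container failure: the evaluator exited with code 1 before writing challenge_results.yaml, so the runner's result check found nothing. A minimal sketch of that failure mode, assuming a simple existence check (the real read_challenge_results in duckietown_challenges also parses and validates the YAML, which is omitted here):

```python
# Sketch of the check behind the NoResultsFound error above: if the
# evaluator container dies before writing its results file, the runner
# has nothing to read and raises.
import os

class NoResultsFound(Exception):
    """Local stand-in for duckietown_challenges...NoResultsFound."""

def read_challenge_results(wd):
    # The evaluator is expected to write this file into the job's
    # working directory before exiting successfully.
    fn = os.path.join(wd, "challenge-results", "challenge_results.yaml")
    if not os.path.exists(fn):
        raise NoResultsFound(f"File {fn!r} does not exist.")
    return fn  # (real code would parse the YAML here)
```

So the actionable diagnosis is the one the message gives: inspect the evaluator container's logs for job 38135 to find why it exited with code 1.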