
Submission 9255

Submission: 9255
Competing: yes
Challenge: aido5-LF-sim-validation
User: Jerome Labonte 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58464
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58464

Episodes (detailed per-episode statistics are available on the original page):

- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | date started | date completed | duration | message
58464 | LFv-sim | success | yes | | | 0:30:53 |
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 6.90973020276577
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.653488213488563
in-drivable-lane_median: 3.124999999999913


other stats
agent_compute-ego0_max: 0.013101722278960242
agent_compute-ego0_mean: 0.012883695866303134
agent_compute-ego0_median: 0.012943441788536526
agent_compute-ego0_min: 0.01254617760917924
complete-iteration_max: 0.2143140952851155
complete-iteration_mean: 0.1850799464633769
complete-iteration_median: 0.1822185486719368
complete-iteration_min: 0.1615685932245183
deviation-center-line_max: 4.129043031955287
deviation-center-line_mean: 3.268425385313492
deviation-center-line_min: 1.6376820823215548
deviation-heading_max: 19.106087575170715
deviation-heading_mean: 11.968858912280428
deviation-heading_median: 11.269202423626776
deviation-heading_min: 6.230943226697443
driven_any_max: 12.125958105800242
driven_any_mean: 8.918270907805367
driven_any_median: 9.807232750025149
driven_any_min: 3.932660025370928
driven_lanedir_consec_max: 11.748658582789458
driven_lanedir_consec_mean: 7.207295131600916
driven_lanedir_consec_min: 3.2610615380826644
driven_lanedir_max: 11.748658582789458
driven_lanedir_mean: 8.28063082147185
driven_lanedir_median: 9.050339843128151
driven_lanedir_min: 3.273185016841635
get_duckie_state_max: 1.5833693479717425e-06
get_duckie_state_mean: 1.5101486711527471e-06
get_duckie_state_median: 1.5345982199107452e-06
get_duckie_state_min: 1.388028896817756e-06
get_robot_state_max: 0.004087995033677075
get_robot_state_mean: 0.003875225495591072
get_robot_state_median: 0.003842781993677215
get_robot_state_min: 0.00372734296133278
get_state_dump_max: 0.005101583680939813
get_state_dump_mean: 0.004956541002683074
get_state_dump_median: 0.00496673145032738
get_state_dump_min: 0.004791117429137726
get_ui_image_max: 0.037129590751527254
get_ui_image_mean: 0.031152512659078
get_ui_image_median: 0.03035044501591357
get_ui_image_min: 0.026779569852957617
in-drivable-lane_max: 3.5000000000000355
in-drivable-lane_mean: 2.8624999999999714
in-drivable-lane_min: 1.7000000000000242
per-episodes details:

{"LF-norm-loop-000-ego0": {"driven_any": 12.125958105800242, "get_ui_image": 0.028943443179229814, "step_physics": 0.10347260960333551, "survival_time": 59.99999999999873, "driven_lanedir": 11.748658582789458, "get_state_dump": 0.005029663654489382, "get_robot_state": 0.0038747932392790554, "sim_render-ego0": 0.004014793383291024, "get_duckie_state": 1.496816058639285e-06, "in-drivable-lane": 1.7000000000000242, "deviation-heading": 8.655546972646226, "agent_compute-ego0": 0.01307113422740012, "complete-iteration": 0.17281216824680046, "set_robot_commands": 0.0023068806015383095, "deviation-center-line": 3.364791070243221, "driven_lanedir_consec": 11.748658582789458, "sim_compute_sim_state": 0.009806687587703098, "sim_compute_performance-ego0": 0.002192325933489772},
 "LF-norm-zigzag-000-ego0": {"driven_any": 9.40271898745405, "get_ui_image": 0.037129590751527254, "step_physics": 0.13394319762993018, "survival_time": 59.99999999999873, "driven_lanedir": 8.49121372275058, "get_state_dump": 0.005101583680939813, "get_robot_state": 0.004087995033677075, "sim_render-ego0": 0.004165697058074977, "get_duckie_state": 1.5833693479717425e-06, "in-drivable-lane": 3.0999999999998984, "deviation-heading": 19.106087575170715, "agent_compute-ego0": 0.013101722278960242, "complete-iteration": 0.2143140952851155, "set_robot_commands": 0.0024039598428438743, "deviation-center-line": 4.129043031955287, "driven_lanedir_consec": 5.793953531456659, "sim_compute_sim_state": 0.011942933342240437, "sim_compute_performance-ego0": 0.002335228788961876},
 "LF-norm-techtrack-000-ego0": {"driven_any": 3.932660025370928, "get_ui_image": 0.03175744685259732, "step_physics": 0.11915145481913544, "survival_time": 24.150000000000208, "driven_lanedir": 3.273185016841635, "get_state_dump": 0.004903799246165378, "get_robot_state": 0.003810770748075375, "sim_render-ego0": 0.003871083259582519, "get_duckie_state": 1.5723803811822055e-06, "in-drivable-lane": 3.5000000000000355, "deviation-heading": 6.230943226697443, "agent_compute-ego0": 0.012815749349672933, "complete-iteration": 0.19162492909707313, "set_robot_commands": 0.0022959068786999413, "deviation-center-line": 1.6376820823215548, "driven_lanedir_consec": 3.2610615380826644, "sim_compute_sim_state": 0.010845384321922114, "sim_compute_performance-ego0": 0.002074668230104052},
 "LF-norm-small_loop-000-ego0": {"driven_any": 10.211746512596248, "get_ui_image": 0.026779569852957617, "step_physics": 0.09923003932816302, "survival_time": 59.99999999999873, "driven_lanedir": 9.609465963505723, "get_state_dump": 0.004791117429137726, "get_robot_state": 0.00372734296133278, "sim_render-ego0": 0.0038425860853616048, "get_duckie_state": 1.388028896817756e-06, "in-drivable-lane": 3.1499999999999275, "deviation-heading": 13.882857874607328, "agent_compute-ego0": 0.01254617760917924, "complete-iteration": 0.1615685932245183, "set_robot_commands": 0.0022376850185346644, "deviation-center-line": 3.942185356733906, "driven_lanedir_consec": 8.025506874074882, "sim_compute_sim_state": 0.006294693180563845, "sim_compute_performance-ego0": 0.002027747831574884}}
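The aggregate rows on this page (`*_max`, `*_mean`, `*_median`, `*_min`) are plain statistics over the four episodes in the details blob. A minimal sketch of that relationship, using the `survival_time` values copied verbatim from the JSON above:

```python
# Sketch: deriving the aggregate stats rows from the per-episode details.
# The survival_time values are copied verbatim from job 58464's four
# episodes; the printed median/mean match the survival_time_median and
# survival_time_mean rows reported on this page.
import statistics

survival_times = {
    "LF-norm-loop-000-ego0": 59.99999999999873,
    "LF-norm-zigzag-000-ego0": 59.99999999999873,
    "LF-norm-techtrack-000-ego0": 24.150000000000208,
    "LF-norm-small_loop-000-ego0": 59.99999999999873,
}

values = list(survival_times.values())
print("median:", statistics.median(values))  # survival_time_median
print("mean:  ", statistics.mean(values))    # survival_time_mean
print("max:   ", max(values))                # survival_time_max
print("min:   ", min(values))                # survival_time_min
```

The same computation over `driven_lanedir_consec`, `deviation-center-line`, and the timing keys reproduces the other aggregate rows.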
set_robot_commands_max: 0.0024039598428438743
set_robot_commands_mean: 0.0023111080854041973
set_robot_commands_median: 0.002301393740119126
set_robot_commands_min: 0.0022376850185346644
sim_compute_performance-ego0_max: 0.002335228788961876
sim_compute_performance-ego0_mean: 0.002157492696032646
sim_compute_performance-ego0_median: 0.002133497081796912
sim_compute_performance-ego0_min: 0.002027747831574884
sim_compute_sim_state_max: 0.011942933342240437
sim_compute_sim_state_mean: 0.009722424608107372
sim_compute_sim_state_median: 0.010326035954812605
sim_compute_sim_state_min: 0.006294693180563845
sim_render-ego0_max: 0.004165697058074977
sim_render-ego0_mean: 0.003973539946577531
sim_render-ego0_median: 0.003942938321436771
sim_render-ego0_min: 0.0038425860853616048
simulation-passed: 1
step_physics_max: 0.13394319762993018
step_physics_mean: 0.11394932534514104
step_physics_median: 0.11131203221123548
step_physics_min: 0.09923003932816302
survival_time_max: 59.99999999999873
survival_time_mean: 51.0374999999991
survival_time_min: 24.150000000000208
58463 | LFv-sim | success | yes | | | 0:34:57 |
52401 | LFv-sim | error | no | | | 0:06:49 |
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140119265674240
- M:video_aido:cmdline(in:/;out:/) 140119265673280
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
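The root cause in the chain above is `PIL.UnidentifiedImageError`: `banner1.png` existed on disk, but Pillow could not recognize its contents. Pillow identifies images by their leading magic bytes, so a zero-byte or corrupted file raises exactly this error even though opening the file succeeds. A minimal stdlib sketch of that failure mode (the `looks_like_png` helper is hypothetical, not Duckietown or Pillow code):

```python
# Sketch: the failure mode behind "PIL.UnidentifiedImageError: cannot
# identify image file 'banner1.png'". Image formats are identified by
# magic bytes, so a file that exists but is empty or corrupted fails
# identification. looks_like_png is a hypothetical pre-flight check.
import pathlib
import tempfile

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"  # first 8 bytes of every valid PNG

def looks_like_png(path) -> bool:
    """True only if the file exists and starts with the PNG signature."""
    p = pathlib.Path(path)
    if not p.is_file():
        return False
    with p.open("rb") as f:
        return f.read(8) == PNG_SIGNATURE

with tempfile.TemporaryDirectory() as d:
    bad = pathlib.Path(d) / "banner1.png"
    bad.write_bytes(b"")  # exists on disk, but not a valid image
    print(looks_like_png(bad))  # → False: the case where Image.open fails
```

Such a check before `make_video2` runs would turn the long chained traceback above into a direct "bad banner file" message.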
52389 | LFv-sim | host-error | no | | | 0:08:45 |

The container "evaluator" exited with code 1.


Look at the logs for the container to know more about the error.
41772 | LFv-sim | success | no | | | 0:09:35 |
38288 | LFv-sim | success | no | | | 0:09:19 |
36390 | LFv-sim | success | no | | | 0:10:10 |
35821 | LFv-sim | success | no | | | 0:01:05 |
35410 | LFv-sim | error | no | | | 0:22:38 |
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9255/LFv-sim-reg02-1b92df2e7e91-1-job35410:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9255/LFv-sim-reg02-1b92df2e7e91-1-job35410/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9255/LFv-sim-reg02-1b92df2e7e91-1-job35410/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9255/LFv-sim-reg02-1b92df2e7e91-1-job35410/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9255/LFv-sim-reg02-1b92df2e7e91-1-job35410/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9255/LFv-sim-reg02-1b92df2e7e91-1-job35410/logs/challenges-runner/stderr.log
35053 | LFv-sim | success | no | | | 0:24:02 |
34546 | LFv-sim | success | no | | | 0:22:28 |
34545 | LFv-sim | success | no | | | 0:22:09 |