
Submission 9324

Submission: 9324
Competing: yes
Challenge: aido5-LF-sim-validation
User: Jerome Labonte 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58182
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58182

Episodes evaluated: LF-norm-loop-000, LF-norm-small_loop-000, LF-norm-techtrack-000, LF-norm-zigzag-000

Evaluation jobs for this submission

Job 58182 (step LFv-sim): success, up to date, duration 0:35:25
Scores:
driven_lanedir_consec_median: 5.997831987137305
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.5559461991381913
in-drivable-lane_median: 5.099999999999939


Other stats:
agent_compute-ego0_max: 0.01241492530289141
agent_compute-ego0_mean: 0.012179876346572253
agent_compute-ego0_median: 0.01222971625173221
agent_compute-ego0_min: 0.011845147579933183
complete-iteration_max: 0.1919915598695423
complete-iteration_mean: 0.17177475818884955
complete-iteration_median: 0.1742312499228167
complete-iteration_min: 0.1466449730402226
deviation-center-line_max: 4.1051585193641325
deviation-center-line_mean: 3.654893692948312
deviation-center-line_min: 3.40252385415273
deviation-heading_max: 15.692573404645463
deviation-heading_mean: 13.070870977946356
deviation-heading_median: 12.666823052985489
deviation-heading_min: 11.25726440116899
driven_any_max: 11.64934061731864
driven_any_mean: 10.548486250548851
driven_any_median: 10.424794929139852
driven_any_min: 9.695014526597056
driven_lanedir_consec_max: 10.568787139049386
driven_lanedir_consec_mean: 6.911574621618154
driven_lanedir_consec_min: 5.0818473731486185
driven_lanedir_max: 11.084740464613011
driven_lanedir_mean: 9.59015048041266
driven_lanedir_median: 9.698751990378335
driven_lanedir_min: 7.878357476280963
get_duckie_state_max: 1.6965238776036246e-06
get_duckie_state_mean: 1.572500557625522e-06
get_duckie_state_median: 1.5497207641601562e-06
get_duckie_state_min: 1.494036824578151e-06
get_robot_state_max: 0.003616178760322107
get_robot_state_mean: 0.003553326580546281
get_robot_state_median: 0.003542247461736649
get_robot_state_min: 0.0035126326383897208
get_state_dump_max: 0.00481247882064832
get_state_dump_mean: 0.0044549072811148945
get_state_dump_median: 0.004367851794113426
get_state_dump_min: 0.004271446715584405
get_ui_image_max: 0.03516555428008652
get_ui_image_mean: 0.030077159255867097
get_ui_image_median: 0.029952916872689965
get_ui_image_min: 0.02523724899800195
in-drivable-lane_max: 8.499999999999602
in-drivable-lane_mean: 4.67499999999987
in-drivable-lane_min: 0.0
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 11.64934061731864, "get_ui_image": 0.02786006359732419, "step_physics": 0.09443885638850812, "survival_time": 59.99999999999873, "driven_lanedir": 11.084740464613011, "get_state_dump": 0.00481247882064832, "get_robot_state": 0.003616178760322107, "sim_render-ego0": 0.0036675892305016817, "get_duckie_state": 1.6965238776036246e-06, "in-drivable-lane": 2.499999999999911, "deviation-heading": 11.25726440116899, "agent_compute-ego0": 0.012296450723716362, "complete-iteration": 0.16130425709669635, "set_robot_commands": 0.0022595730748998432, "deviation-center-line": 3.40252385415273, "driven_lanedir_consec": 6.30031527920289, "sim_compute_sim_state": 0.010358272841530576, "sim_compute_performance-ego0": 0.0019075326578106909},
 "LF-norm-zigzag-000-ego0": {"driven_any": 10.028296249384864, "get_ui_image": 0.03516555428008652, "step_physics": 0.11775025559106932, "survival_time": 59.99999999999873, "driven_lanedir": 8.828716841707287, "get_state_dump": 0.004271446715584405, "get_robot_state": 0.0035178039989106165, "sim_render-ego0": 0.003602549396486306, "get_duckie_state": 1.5200425147216186e-06, "in-drivable-lane": 7.699999999999967, "deviation-heading": 13.981915488279936, "agent_compute-ego0": 0.012162981779748058, "complete-iteration": 0.1919915598695423, "set_robot_commands": 0.002134793009190238, "deviation-center-line": 3.431767073918256, "driven_lanedir_consec": 5.695348695071721, "sim_compute_sim_state": 0.011444057850516111, "sim_compute_performance-ego0": 0.00186466733978551},
 "LF-norm-techtrack-000-ego0": {"driven_any": 9.695014526597056, "get_ui_image": 0.032045770148055736, "step_physics": 0.11516150407846724, "survival_time": 59.99999999999873, "driven_lanedir": 7.878357476280963, "get_state_dump": 0.004380506242343925, "get_robot_state": 0.003566690924562681, "sim_render-ego0": 0.0035959068285634772, "get_duckie_state": 1.579399013598694e-06, "in-drivable-lane": 8.499999999999602, "deviation-heading": 15.692573404645463, "agent_compute-ego0": 0.01241492530289141, "complete-iteration": 0.18715824274893705, "set_robot_commands": 0.002077374232003929, "deviation-center-line": 4.1051585193641325, "driven_lanedir_consec": 5.0818473731486185, "sim_compute_sim_state": 0.011996335530658249, "sim_compute_performance-ego0": 0.001840796299917712},
 "LF-norm-small_loop-000-ego0": {"driven_any": 10.821293608894845, "get_ui_image": 0.02523724899800195, "step_physics": 0.08768562333569936, "survival_time": 59.99999999999873, "driven_lanedir": 10.568787139049386, "get_state_dump": 0.004355197345882927, "get_robot_state": 0.0035126326383897208, "sim_render-ego0": 0.0036656376126406095, "get_duckie_state": 1.494036824578151e-06, "in-drivable-lane": 0.0, "deviation-heading": 11.35173061769104, "agent_compute-ego0": 0.011845147579933183, "complete-iteration": 0.1466449730402226, "set_robot_commands": 0.002257261943261292, "deviation-center-line": 3.6801253243581264, "driven_lanedir_consec": 10.568787139049386, "sim_compute_sim_state": 0.006094653838679355, "sim_compute_performance-ego0": 0.0019131878035749425}}
set_robot_commands_max: 0.0022595730748998432
set_robot_commands_mean: 0.0021822505648388256
set_robot_commands_median: 0.002196027476225765
set_robot_commands_min: 0.002077374232003929
sim_compute_performance-ego0_max: 0.0019131878035749425
sim_compute_performance-ego0_mean: 0.0018815460252722136
sim_compute_performance-ego0_median: 0.0018860999987981005
sim_compute_performance-ego0_min: 0.001840796299917712
sim_compute_sim_state_max: 0.011996335530658249
sim_compute_sim_state_mean: 0.009973330015346072
sim_compute_sim_state_median: 0.010901165346023345
sim_compute_sim_state_min: 0.006094653838679355
sim_render-ego0_max: 0.0036675892305016817
sim_render-ego0_mean: 0.0036329207670480184
sim_render-ego0_median: 0.0036340935045634576
sim_render-ego0_min: 0.0035959068285634772
simulation-passed: 1
step_physics_max: 0.11775025559106932
step_physics_mean: 0.10375905984843602
step_physics_median: 0.10480018023348768
step_physics_min: 0.08768562333569936
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
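
The *_min/_mean/_median/_max rows above look like plain per-metric summaries over the four episodes. A minimal sketch (not the official scoring code) that reproduces one of them from the values in the per-episodes details, using Python's statistics module:

import statistics

# Per-episode driven_lanedir_consec values, copied from the
# "per-episodes details" blob above.
values = [
    6.30031527920289,     # LF-norm-loop-000-ego0
    5.695348695071721,    # LF-norm-zigzag-000-ego0
    5.0818473731486185,   # LF-norm-techtrack-000-ego0
    10.568787139049386,   # LF-norm-small_loop-000-ego0
]

print("min:   ", min(values))                # 5.0818473731486185 (= _min above)
print("max:   ", max(values))                # 10.568787139049386 (= _max above)
print("mean:  ", statistics.mean(values))    # ~6.911574621618154 (= _mean above)
print("median:", statistics.median(values))  # ~5.9978319871373; with four episodes
                                             # this is the average of the two middle values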
Job 52310 (step LFv-sim): error, not up to date, duration 0:08:12
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140197553323216
- M:video_aido:cmdline(in:/;out:/) 140197374438512
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
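
The root failure in this job is that PIL could not parse 'banner1.png' (likely an empty or corrupted file) when the experiment manager rendered the episode video. A minimal sketch of a pre-flight check, assuming Pillow is installed; check_banner is a hypothetical helper, not part of the Duckietown stack:

from PIL import Image, UnidentifiedImageError

def check_banner(path: str) -> bool:
    """Return True iff `path` is a readable image (hypothetical helper)."""
    try:
        with Image.open(path) as im:
            im.verify()  # parse headers without decoding the full image
        return True
    except (FileNotFoundError, UnidentifiedImageError) as e:
        print(f"banner unusable, video rendering would fail: {path}: {e}")
        return False

check_banner("banner1.png")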
Job 52304 (step LFv-sim): error, not up to date, duration 0:09:29
InvalidEvaluator: identical failure to job 52310 above (PIL.UnidentifiedImageError: cannot identify image file 'banner1.png' while rendering the episode video).
Job 41719 (step LFv-sim): success, not up to date, duration 0:09:40
Job 41718 (step LFv-sim): success, not up to date, duration 0:09:47
Job 38165 (step LFv-sim): error, not up to date, duration 0:00:39
The container "evalu [...]
The container "evaluator" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.49-py3.8.egg/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/media/sdb/duckietown-tmp-evals/production/aido5-LF-sim-validation/submission9324/LFv-sim-mont02-80325a328f54-1-job38165-a-wd/challenge-results/challenge_results.yaml' does not exist.
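
The runner's NoResultsFound means the evaluator exited before writing challenge-results/challenge_results.yaml into its working directory. A minimal sketch of that existence check, assuming PyYAML; read_results is a hypothetical stand-in for the runner's read_challenge_results, not the actual implementation:

from pathlib import Path
import yaml  # PyYAML, assumed available

def read_results(workdir: str) -> dict:
    # The runner only considers a job finished once the evaluator has
    # written this file; its absence is what raises NoResultsFound above.
    results = Path(workdir) / "challenge-results" / "challenge_results.yaml"
    if not results.exists():
        raise FileNotFoundError(f"File '{results}' does not exist.")
    return yaml.safe_load(results.read_text())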
Job 38163 (step LFv-sim): error, not up to date, duration 0:00:42
The container "evalu [...]
The container "evaluator" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.49-py3.8.egg/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/media/sdb/duckietown-tmp-evals/production/aido5-LF-sim-validation/submission9324/LFv-sim-mont04-e828c68b6a88-1-job38163-a-wd/challenge-results/challenge_results.yaml' does not exist.
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
No reset possible
Job 36313 (step LFv-sim): success, not up to date, duration 0:10:58
Job 36312 (step LFv-sim): error, not up to date, duration 0:00:51
The container "solut [...]
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9324/LFv-sim-Sandy2-sandy-1-job36312-a-wd/challenge-results/challenge_results.yaml' does not exist.
Job 35752 (step LFv-sim): success, not up to date, duration 0:00:59
Job 35748 (step LFv-sim): success, not up to date, duration 0:01:02
Job 35356 (step LFv-sim): error, not up to date, duration 0:22:52
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9324/LFv-sim-reg05-b2dee9d94ee0-1-job35356:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9324/LFv-sim-reg05-b2dee9d94ee0-1-job35356/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9324/LFv-sim-reg05-b2dee9d94ee0-1-job35356/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9324/LFv-sim-reg05-b2dee9d94ee0-1-job35356/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9324/LFv-sim-reg05-b2dee9d94ee0-1-job35356/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9324/LFv-sim-reg05-b2dee9d94ee0-1-job35356/logs/challenges-runner/stderr.log
Job 34990 (step LFv-sim): success, not up to date, duration 0:23:25
Job 34681 (step LFv-sim): success, not up to date, duration 0:26:18