
Submission 9303

Submission: 9303
Competing: yes
Challenge: aido5-LF-sim-validation
User: Jerome Labonte 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58347
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58347

Episodes (per-episode screenshots omitted):

LF-norm-loop-000
LF-norm-small_loop-000
LF-norm-techtrack-000
LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | date started | date completed | duration | message
58347 | LFv-sim | success | yes | | | 0:22:59 |
driven_lanedir_consec_median: 4.164378997688279
survival_time_median: 45.19999999999951
deviation-center-line_median: 2.1138685602482097
in-drivable-lane_median: 11.150000000000023


other stats
agent_compute-ego0_max: 0.012584614614761442
agent_compute-ego0_mean: 0.01236872739469434
agent_compute-ego0_median: 0.012426290205002518
agent_compute-ego0_min: 0.01203771455401088
complete-iteration_max: 0.22764010578149255
complete-iteration_mean: 0.19389673508005023
complete-iteration_median: 0.19525415955649375
complete-iteration_min: 0.157438515425721
deviation-center-line_max: 4.051366195135703
deviation-center-line_mean: 2.1156310288990996
deviation-center-line_min: 0.18342079996427588
deviation-heading_max: 17.708927742360775
deviation-heading_mean: 10.201934412537677
deviation-heading_median: 11.053040060944916
deviation-heading_min: 0.9927297859000894
driven_any_max: 13.473174627929213
driven_any_mean: 8.28601329557017
driven_any_median: 9.280012448304202
driven_any_min: 1.1108536577430677
driven_lanedir_consec_max: 8.425935209053145
driven_lanedir_consec_mean: 4.3602817109404866
driven_lanedir_consec_min: 0.6864336393322421
driven_lanedir_max: 11.573141151652914
driven_lanedir_mean: 5.750686334641518
driven_lanedir_median: 5.369350676418787
driven_lanedir_min: 0.6909028340755847
get_duckie_state_max: 1.2953215892070737e-06
get_duckie_state_mean: 1.262512783827384e-06
get_duckie_state_median: 1.2701608055856754e-06
get_duckie_state_min: 1.2144079349311115e-06
get_robot_state_max: 0.003700020906827928
get_robot_state_mean: 0.003596391206305974
get_robot_state_median: 0.0035718842946160347
get_robot_state_min: 0.0035417753291639006
get_state_dump_max: 0.0046791532057508835
get_state_dump_mean: 0.004543469096339031
get_state_dump_median: 0.004532965844363921
get_state_dump_min: 0.004428791490877399
get_ui_image_max: 0.03654651453929582
get_ui_image_mean: 0.03118955869290851
get_ui_image_median: 0.03101549408535774
get_ui_image_min: 0.02618073206162274
in-drivable-lane_max: 20.549999999999297
in-drivable-lane_mean: 11.424999999999834
in-drivable-lane_min: 2.84999999999999
per-episodes details (the sketch after these stats recomputes the aggregates from this object):

LF-norm-loop-000-ego0: {"driven_any": 12.783199807307168, "get_ui_image": 0.02862994279789984, "step_physics": 0.11296727436964556, "survival_time": 59.99999999999873, "driven_lanedir": 7.910245324265457, "get_state_dump": 0.0046791532057508835, "get_robot_state": 0.003700020906827928, "sim_render-ego0": 0.003769012811678236, "get_duckie_state": 1.2784476681216174e-06, "in-drivable-lane": 20.549999999999297, "deviation-heading": 17.708927742360775, "agent_compute-ego0": 0.012584614614761442, "complete-iteration": 0.18023791817403853, "set_robot_commands": 0.002218064023096496, "deviation-center-line": 2.9487673610124125, "driven_lanedir_consec": 6.824765385371238, "sim_compute_sim_state": 0.009645038997005364, "sim_compute_performance-ego0": 0.0019636517460399822}
LF-norm-zigzag-000-ego0: {"driven_any": 5.776825089301235, "get_ui_image": 0.03654651453929582, "step_physics": 0.15124684366686592, "survival_time": 30.400000000000297, "driven_lanedir": 2.8284560285721154, "get_state_dump": 0.004428791490877399, "get_robot_state": 0.0035417753291639006, "sim_render-ego0": 0.003633662006146411, "get_duckie_state": 1.2144079349311115e-06, "in-drivable-lane": 13.85000000000013, "deviation-heading": 10.091427167705346, "agent_compute-ego0": 0.012298886607628932, "complete-iteration": 0.22764010578149255, "set_robot_commands": 0.002057696798164856, "deviation-center-line": 1.278969759484007, "driven_lanedir_consec": 1.50399261000532, "sim_compute_sim_state": 0.011947934067699511, "sim_compute_performance-ego0": 0.001866664401024629}
LF-norm-techtrack-000-ego0: {"driven_any": 1.1108536577430677, "get_ui_image": 0.03340104537281564, "step_physics": 0.13979796665470776, "survival_time": 6.099999999999986, "driven_lanedir": 0.6909028340755847, "get_state_dump": 0.004605773987808848, "get_robot_state": 0.0036006981764382462, "sim_render-ego0": 0.003748626243777391, "get_duckie_state": 1.2618739430497333e-06, "in-drivable-lane": 2.84999999999999, "deviation-heading": 0.9927297859000894, "agent_compute-ego0": 0.012553693802376104, "complete-iteration": 0.21027040093894897, "set_robot_commands": 0.002127542728331031, "deviation-center-line": 0.18342079996427588, "driven_lanedir_consec": 0.6864336393322421, "sim_compute_sim_state": 0.008452353438710779, "sim_compute_performance-ego0": 0.001904774487503176}
LF-norm-small_loop-000-ego0: {"driven_any": 13.473174627929213, "get_ui_image": 0.02618073206162274, "step_physics": 0.09749849114588754, "survival_time": 59.99999999999873, "driven_lanedir": 11.573141151652914, "get_state_dump": 0.004460157700918993, "get_robot_state": 0.0035430704127938227, "sim_render-ego0": 0.0036231576155663337, "get_duckie_state": 1.2953215892070737e-06, "in-drivable-lane": 8.449999999999916, "deviation-heading": 12.014652954184486, "agent_compute-ego0": 0.01203771455401088, "complete-iteration": 0.157438515425721, "set_robot_commands": 0.00207389006507486, "deviation-center-line": 4.051366195135703, "driven_lanedir_consec": 8.425935209053145, "sim_compute_sim_state": 0.006081345674894334, "sim_compute_performance-ego0": 0.0018658306477568132}
set_robot_commands_max: 0.002218064023096496
set_robot_commands_mean: 0.002119298403666811
set_robot_commands_median: 0.0021007163967029455
set_robot_commands_min: 0.002057696798164856
sim_compute_performance-ego0_max: 0.0019636517460399822
sim_compute_performance-ego0_mean: 0.00190023032058115
sim_compute_performance-ego0_median: 0.0018857194442639025
sim_compute_performance-ego0_min: 0.0018658306477568132
sim_compute_sim_state_max: 0.011947934067699511
sim_compute_sim_state_mean: 0.009031668044577498
sim_compute_sim_state_median: 0.00904869621785807
sim_compute_sim_state_min: 0.006081345674894334
sim_render-ego0_max: 0.003769012811678236
sim_render-ego0_mean: 0.003693614669292093
sim_render-ego0_median: 0.003691144124961901
sim_render-ego0_min: 0.0036231576155663337
simulation-passed: 1
step_physics_max: 0.15124684366686592
step_physics_mean: 0.1253776439592767
step_physics_median: 0.12638262051217666
step_physics_min: 0.09749849114588754
survival_time_max: 59.99999999999873
survival_time_mean: 39.12499999999943
survival_time_min: 6.099999999999986
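
The aggregate rows above are per-metric summaries over the four episodes. A minimal sketch of recomputing them from the per-episodes details, assuming that object has been saved to details.json (a hypothetical filename for this sketch):

import json
from statistics import mean, median

# Load the per-episodes details object shown above
# ("details.json" is a hypothetical filename, not produced by the evaluator).
with open("details.json") as f:
    episodes = json.load(f)

# Gather each metric's values across the four episodes, then
# recompute the min/mean/median/max rows of the stats table.
by_metric = {}
for stats in episodes.values():
    for name, value in stats.items():
        by_metric.setdefault(name, []).append(value)

for name, values in sorted(by_metric.items()):
    print(f"{name}_min: {min(values)}")
    print(f"{name}_mean: {mean(values)}")
    print(f"{name}_median: {median(values)}")
    print(f"{name}_max: {max(values)}")

With four episodes the median is the average of the two middle values: for survival_time the sorted values are roughly 6.1, 30.4, 60.0, 60.0, so the median is (30.4 + 60.0) / 2 ≈ 45.2, matching survival_time_median above.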
58344 | LFv-sim | success | yes | | | 0:23:18 |
58341 | LFv-sim | success | yes | | | 0:21:23 |
58334 | LFv-sim | success | yes | | | 0:23:56 |
58330 | LFv-sim | success | yes | | | 0:11:53 |
58327 | LFv-sim | success | yes | | | 0:14:58 |
58319 | LFv-sim | success | yes | | | 0:23:11 |
52352 | LFv-sim | error | no | | | 0:08:07 |
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140692377211520
- M:video_aido:cmdline(in:/;out:/) 140692377212960
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
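
This job failed while assembling the result video: procgraph's static_image block hands the configured banner file to PIL, and 'banner1.png' was not a parseable image (for example a truncated or zero-byte file). A small sketch of the same check, usable to validate a banner asset up front; the verify_banner helper is illustrative, not part of the evaluator:

from PIL import Image, UnidentifiedImageError

def verify_banner(filename: str) -> bool:
    """Return True if PIL can identify and parse the file as an image."""
    try:
        with Image.open(filename) as im:
            im.verify()  # header/integrity check without a full decode
        return True
    except (FileNotFoundError, UnidentifiedImageError) as e:
        # Same failure mode as in the traceback above.
        print(f"cannot use {filename!r} as a banner: {e}")
        return False

verify_banner("banner1.png")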
41745 | LFv-sim | success | no | | | 0:07:40 |
38221 | LFv-sim | success | no | | | 0:07:29 |
38212 | LFv-sim | error | no | | | 0:00:35 |
The container "evalu [...]
The container "evaluator" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.49-py3.8.egg/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/media/sdb/duckietown-tmp-evals/production/aido5-LF-sim-validation/submission9303/LFv-sim-mont05-227ea22a5fff-1-job38212-a-wd/challenge-results/challenge_results.yaml' does not exist.
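
The runner's read_challenge_results expects challenge-results/challenge_results.yaml inside the job's working directory and raises NoResultsFound when it is absent. A sketch of the equivalent lookup, with the helper name and exception choice illustrative rather than the library's actual code:

import os
import yaml  # PyYAML

def load_challenge_results(wd: str) -> dict:
    """Load challenge-results/challenge_results.yaml from a job working dir."""
    path = os.path.join(wd, "challenge-results", "challenge_results.yaml")
    if not os.path.exists(path):
        # Mirrors the NoResultsFound condition in the traceback above.
        raise FileNotFoundError(f"File {path!r} does not exist.")
    with open(path) as f:
        return yaml.safe_load(f)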
36343 | LFv-sim | error | no | | | 0:00:44 |
The container "solut [...]
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9303/LFv-sim-Sandy1-sandy-1-job36343-a-wd/challenge-results/challenge_results.yaml' does not exist.
36342 | LFv-sim | error | no | | | 0:00:46 |
The container "solut [...]
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9303/LFv-sim-Sandy2-sandy-1-job36342-a-wd/challenge-results/challenge_results.yaml' does not exist.
35770 | LFv-sim | success | no | | | 0:01:06 |
35375 | LFv-sim | error | no | | | 0:21:10 |
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9303/LFv-sim-reg03-0c28c9d61367-1-job35375:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9303/LFv-sim-reg03-0c28c9d61367-1-job35375/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9303/LFv-sim-reg03-0c28c9d61367-1-job35375/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9303/LFv-sim-reg03-0c28c9d61367-1-job35375/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9303/LFv-sim-reg03-0c28c9d61367-1-job35375/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9303/LFv-sim-reg03-0c28c9d61367-1-job35375/logs/challenges-runner/stderr.log
35374 | LFv-sim | error | no | | | 0:21:12 |
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9303/LFv-sim-reg02-1b92df2e7e91-1-job35374:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9303/LFv-sim-reg02-1b92df2e7e91-1-job35374/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9303/LFv-sim-reg02-1b92df2e7e91-1-job35374/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9303/LFv-sim-reg02-1b92df2e7e91-1-job35374/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9303/LFv-sim-reg02-1b92df2e7e91-1-job35374/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9303/LFv-sim-reg02-1b92df2e7e91-1-job35374/logs/challenges-runner/stderr.log
35008 | LFv-sim | success | no | | | 0:20:48 |
34643 | LFv-sim | success | no | | | 0:22:12 |