
Submission 9340

Submission: 9340
Competing: yes
Challenge: aido5-LF-sim-validation
User: Jerome Labonte 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58156
Next:
User label: sim-exercise-1
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58156

Episodes:

- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | date started | date completed | duration | message
58156 | LFv-sim | success | yes | - | - | 0:36:12 |
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 9.04034367648504
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.277633019600924
in-drivable-lane_median: 5.049999999999827
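Survival times such as 59.99999999999873 sit a hair under the nominal 60 s episode length, which is consistent with the simulator accumulating a fixed timestep in binary floating point. The 0.05 s step and 1200-step count below are illustrative guesses, not values taken from this page:

```python
# Accumulating a fixed timestep rarely lands exactly on a round number,
# because 0.05 is not exactly representable in binary floating point.
dt = 0.05
t = 0.0
for _ in range(1200):  # 1200 steps of 0.05 s is nominally 60 s
    t += dt

print(t)  # extremely close to 60.0, but typically not exactly 60.0
assert abs(t - 60.0) < 1e-9
```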


other stats

agent_compute-ego0_max: 0.012447380999740614
agent_compute-ego0_mean: 0.011997626586227654
agent_compute-ego0_median: 0.012058911974891166
agent_compute-ego0_min: 0.011425301395387673
complete-iteration_max: 0.19706354245804905
complete-iteration_mean: 0.1796951086826638
complete-iteration_median: 0.17869770387924283
complete-iteration_min: 0.1643214845141205
deviation-center-line_max: 3.892004011333176
deviation-center-line_mean: 3.239796271599934
deviation-center-line_min: 2.511915035864712
deviation-heading_max: 12.370116231798322
deviation-heading_mean: 11.481960440538607
deviation-heading_median: 11.589930780417902
deviation-heading_min: 10.377863969520297
driven_any_max: 11.731343783367628
driven_any_mean: 10.881900574922774
driven_any_median: 10.916370070205195
driven_any_min: 9.96351837591308
driven_lanedir_consec_max: 10.16192888842548
driven_lanedir_consec_mean: 8.621917747812162
driven_lanedir_consec_min: 6.245054749853091
driven_lanedir_max: 11.284844636622305
driven_lanedir_mean: 9.69588107944786
driven_lanedir_median: 10.062262896562665
driven_lanedir_min: 7.374153888043809
get_duckie_state_max: 1.3106073765433103e-06
get_duckie_state_mean: 1.2784875020684522e-06
get_duckie_state_median: 1.3040563248277802e-06
get_duckie_state_min: 1.1952299820749384e-06
get_robot_state_max: 0.003854239016746502
get_robot_state_mean: 0.0037029204668786895
get_robot_state_median: 0.0036897264750650176
get_robot_state_min: 0.0035779899006382213
get_state_dump_max: 0.004839275599915618
get_state_dump_mean: 0.004663176143584025
get_state_dump_median: 0.004643445606533435
get_state_dump_min: 0.00452653776135361
get_ui_image_max: 0.03403557999092236
get_ui_image_mean: 0.030119425734671277
get_ui_image_median: 0.030035253269884807
get_ui_image_min: 0.026371616407993136
in-drivable-lane_max: 14.949999999999328
in-drivable-lane_mean: 6.637499999999741
in-drivable-lane_min: 1.4999999999999811
per-episodes details:

{"LF-norm-loop-000-ego0": {"driven_any": 11.363829095372475, "get_ui_image": 0.0271287359464774, "step_physics": 0.102237783006387, "survival_time": 59.99999999999873, "driven_lanedir": 10.40418155782263, "get_state_dump": 0.004545517706255631, "get_robot_state": 0.0035779899006382213, "sim_render-ego0": 0.0036981905826819527, "get_duckie_state": 1.3104088598246578e-06, "in-drivable-lane": 4.8999999999997605, "deviation-heading": 12.370116231798322, "agent_compute-ego0": 0.0121033370345955, "complete-iteration": 0.16714174364329773, "set_robot_commands": 0.002151846190872637, "deviation-center-line": 3.2639775075626924, "driven_lanedir_consec": 8.360343117667385, "sim_compute_sim_state": 0.009686519065367789, "sim_compute_performance-ego0": 0.0019237431360224105},
 "LF-norm-zigzag-000-ego0": {"driven_any": 9.96351837591308, "get_ui_image": 0.03403557999092236, "step_physics": 0.12104982242249604, "survival_time": 56.9499999999989, "driven_lanedir": 7.374153888043809, "get_state_dump": 0.00452653776135361, "get_robot_state": 0.0036411852167363753, "sim_render-ego0": 0.0037815062623274952, "get_duckie_state": 1.1952299820749384e-06, "in-drivable-lane": 14.949999999999328, "deviation-heading": 10.377863969520297, "agent_compute-ego0": 0.012014486915186832, "complete-iteration": 0.19706354245804905, "set_robot_commands": 0.0021840089245846396, "deviation-center-line": 2.511915035864712, "driven_lanedir_consec": 6.245054749853091, "sim_compute_sim_state": 0.013744606678945975, "sim_compute_performance-ego0": 0.001999566220400626},
 "LF-norm-techtrack-000-ego0": {"driven_any": 11.731343783367628, "get_ui_image": 0.03294177059329221, "step_physics": 0.11477484333822868, "survival_time": 59.99999999999873, "driven_lanedir": 11.284844636622305, "get_state_dump": 0.004839275599915618, "get_robot_state": 0.003854239016746502, "sim_render-ego0": 0.0038662915622860463, "get_duckie_state": 1.2977037898309026e-06, "in-drivable-lane": 1.4999999999999811, "deviation-heading": 11.427904357146163, "agent_compute-ego0": 0.012447380999740614, "complete-iteration": 0.19025366411518793, "set_robot_commands": 0.002310676042682225, "deviation-center-line": 3.2912885316391565, "driven_lanedir_consec": 10.16192888842548, "sim_compute_sim_state": 0.013037301221556906, "sim_compute_performance-ego0": 0.00208983830270124},
 "LF-norm-small_loop-000-ego0": {"driven_any": 10.468911045037917, "get_ui_image": 0.026371616407993136, "step_physics": 0.10375001706449712, "survival_time": 59.99999999999873, "driven_lanedir": 9.720344235302695, "get_state_dump": 0.004741373506811239, "get_robot_state": 0.003738267733393661, "sim_render-ego0": 0.0037786998319983183, "get_duckie_state": 1.3106073765433103e-06, "in-drivable-lane": 5.199999999999893, "deviation-heading": 11.751957203689638, "agent_compute-ego0": 0.011425301395387673, "complete-iteration": 0.1643214845141205, "set_robot_commands": 0.002265369366051057, "deviation-center-line": 3.892004011333176, "driven_lanedir_consec": 9.720344235302695, "sim_compute_sim_state": 0.006152874226375583, "sim_compute_performance-ego0": 0.002008203662106834}}
set_robot_commands_max: 0.002310676042682225
set_robot_commands_mean: 0.00222797513104764
set_robot_commands_median: 0.002224689145317848
set_robot_commands_min: 0.002151846190872637
sim_compute_performance-ego0_max: 0.00208983830270124
sim_compute_performance-ego0_mean: 0.0020053378303077777
sim_compute_performance-ego0_median: 0.00200388494125373
sim_compute_performance-ego0_min: 0.0019237431360224105
sim_compute_sim_state_max: 0.013744606678945975
sim_compute_sim_state_mean: 0.010655325298061563
sim_compute_sim_state_median: 0.011361910143462347
sim_compute_sim_state_min: 0.006152874226375583
sim_render-ego0_max: 0.0038662915622860463
sim_render-ego0_mean: 0.003781172059823453
sim_render-ego0_median: 0.003780103047162907
sim_render-ego0_min: 0.0036981905826819527
simulation-passed: 1
step_physics_max: 0.12104982242249604
step_physics_mean: 0.1104531164579022
step_physics_median: 0.1092624302013629
step_physics_min: 0.102237783006387
survival_time_max: 59.99999999999873
survival_time_mean: 59.23749999999877
survival_time_min: 56.9499999999989
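The `_min`/`_mean`/`_median`/`_max` entries above are per-metric aggregates over the four episodes. A sketch of how such aggregates can be computed from the per-episode details (the function and variable names are ours, not the evaluator's; only `driven_lanedir_consec` is shown, with values copied from this page):

```python
import statistics

# Abbreviated per-episode details, values taken from the table above.
per_episode = {
    "LF-norm-loop-000-ego0": {"driven_lanedir_consec": 8.360343117667385},
    "LF-norm-zigzag-000-ego0": {"driven_lanedir_consec": 6.245054749853091},
    "LF-norm-techtrack-000-ego0": {"driven_lanedir_consec": 10.16192888842548},
    "LF-norm-small_loop-000-ego0": {"driven_lanedir_consec": 9.720344235302695},
}

def aggregate(per_episode, metric):
    """Collapse one metric across episodes into min/mean/median/max."""
    values = [ep[metric] for ep in per_episode.values()]
    return {
        f"{metric}_min": min(values),
        f"{metric}_mean": statistics.mean(values),
        f"{metric}_median": statistics.median(values),
        f"{metric}_max": max(values),
    }

agg = aggregate(per_episode, "driven_lanedir_consec")
print(agg["driven_lanedir_consec_median"])  # ≈ 9.04034367648504, matching the headline score
```

With an even number of episodes the median is the mean of the two middle values, which is why the headline `driven_lanedir_consec_median` does not equal any single episode's value.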
58142 | LFv-sim | timeout | yes | - | - | - | No reset possible
52224 | LFv-sim | error | no | - | - | 0:08:39 |
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140124326049392
- M:video_aido:cmdline(in:/;out:/) 140124323969536
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
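The traceback above is three chained Python exceptions: `Image.open` raises `UnidentifiedImageError` on `banner1.png`, `imread` re-raises it as a `ValueError` with `raise ... from e`, and the experiment manager wraps that again as `InvalidEvaluator`. A self-contained sketch of the `raise ... from` pattern (the functions and the PNG check are illustrative stand-ins, not PIL's or procgraph's code):

```python
import pathlib
import tempfile

class UnidentifiedImageError(Exception):
    """Stand-in for PIL.UnidentifiedImageError."""

def open_image(filename):
    # A real PNG starts with an 8-byte signature; an empty or corrupt
    # file fails this check, like 'banner1.png' in the failing job.
    with open(filename, "rb") as f:
        header = f.read(8)
    if not header.startswith(b"\x89PNG\r\n\x1a\n"):
        raise UnidentifiedImageError(f"cannot identify image file {filename!r}")
    return header

def imread(filename):
    try:
        return open_image(filename)
    except UnidentifiedImageError as e:
        # Produces "The above exception was the direct cause of the
        # following exception" in the printed traceback.
        raise ValueError(f'Could not open filename "{filename}".') from e

# An empty banner file triggers the whole chain:
tmp = pathlib.Path(tempfile.mkdtemp()) / "banner1.png"
tmp.write_bytes(b"")
try:
    imread(str(tmp))
except ValueError as err:
    assert isinstance(err.__cause__, UnidentifiedImageError)
```

The fix on the evaluator side would be to supply a valid `banner1.png`; the agent under evaluation is not at fault, which is why this job is classed as an evaluator error.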
41695 | LFv-sim | success | no | - | - | 0:09:22 |
38303 | LFv-sim | success | no | - | - | 0:09:09 |
38144 | LFv-sim | aborted | no | - | - | 0:00:43 |
The container "evaluator" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.49-py3.8.egg/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/media/sdb/duckietown-tmp-evals/production/aido5-LF-sim-validation/submission9340/LFv-sim-mont01-6ef51bb8a9d6-1-job38144-a-wd/challenge-results/challenge_results.yaml' does not exist.
36518 | LFv-sim | aborted | no | - | - | 0:10:23 |
36517 | LFv-sim | aborted | no | - | - | 0:00:48 |
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-Sandy1-sandy-1-job36517-a-wd/challenge-results/challenge_results.yaml' does not exist.
36516 | LFv-sim | aborted | no | - | - | 0:00:43 |
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-Sandy2-sandy-1-job36516-a-wd/challenge-results/challenge_results.yaml' does not exist.
36289 | LFv-sim | aborted | no | - | - | 0:00:52 |
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-Sandy2-sandy-1-job36289-a-wd/challenge-results/challenge_results.yaml' does not exist.
36288 | LFv-sim | aborted | no | - | - | 0:01:08 |
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-Sandy1-sandy-1-job36288-a-wd/challenge-results/challenge_results.yaml' does not exist.
35729 | LFv-sim | aborted | no | - | - | 0:01:06 |
35728 | LFv-sim | aborted | no | - | - | 0:01:06 |
35724 | LFv-sim | aborted | no | - | - | 0:01:01 |
35554 | LFv-sim | aborted | no | - | - | 0:23:42 |
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg05-41573a93de19-1-job35554:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg05-41573a93de19-1-job35554/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg05-41573a93de19-1-job35554/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg05-41573a93de19-1-job35554/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg05-41573a93de19-1-job35554/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg05-41573a93de19-1-job35554/logs/challenges-runner/stderr.log
35540 | LFv-sim | aborted | no | - | - | 0:25:46 |
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg04-5753c726a5d0-1-job35540:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg04-5753c726a5d0-1-job35540/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg04-5753c726a5d0-1-job35540/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg04-5753c726a5d0-1-job35540/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg04-5753c726a5d0-1-job35540/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg04-5753c726a5d0-1-job35540/logs/challenges-runner/stderr.log
35512 | LFv-sim | aborted | no | - | - | 1:05:45 |
Job 35512 timed out: 3945 seconds elapsed and the timeout is 3600.0 seconds.
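The runner's check here is a simple wall-clock budget per job; the numbers come from the message above, while the function itself is an illustrative sketch, not the runner's code:

```python
def job_timed_out(elapsed_s: float, timeout_s: float = 3600.0) -> bool:
    """True when a job's wall-clock time exceeded its evaluation budget."""
    return elapsed_s > timeout_s

# Job 35512 ran 1:05:45, i.e. 3945 s, against a 3600.0 s budget:
assert job_timed_out(3945)
assert not job_timed_out(3599)
```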
35507 | LFv-sim | aborted | no | - | - | 0:22:02 |
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg02-1b92df2e7e91-1-job35507:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg02-1b92df2e7e91-1-job35507/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg02-1b92df2e7e91-1-job35507/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg02-1b92df2e7e91-1-job35507/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg02-1b92df2e7e91-1-job35507/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg02-1b92df2e7e91-1-job35507/logs/challenges-runner/stderr.log
35340 | LFv-sim | aborted | no | - | - | 0:22:11 |
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg01-94a6fab21ac9-1-job35340:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg01-94a6fab21ac9-1-job35340/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg01-94a6fab21ac9-1-job35340/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg01-94a6fab21ac9-1-job35340/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg01-94a6fab21ac9-1-job35340/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9340/LFv-sim-reg01-94a6fab21ac9-1-job35340/logs/challenges-runner/stderr.log
34976 | LFv-sim | aborted | no | - | - | 0:23:20 |
34712 | LFv-sim | aborted | no | - | - | 0:29:24 |
34711 | LFv-sim | aborted | no | - | - | 0:25:10 |