Duckietown Challenges

Submission 6844

Submission: 6844
Competing: yes
Challenge: aido5-LF-sim-validation
User: Daniil Lisus
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58541
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58541

Episodes (detailed per-episode statistics are available on the challenge server):

- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID / step / status / up to date / duration / message
58541 / LFv-sim / success / yes / 0:26:47
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 3.4586184966108715
survival_time_median: 59.99999999999873
deviation-center-line_median: 1.4871919268611844
in-drivable-lane_median: 9.34999999999987


other stats:

agent_compute-ego0_max: 0.013927590279352095
agent_compute-ego0_mean: 0.013227416796572909
agent_compute-ego0_median: 0.013084196329712371
agent_compute-ego0_min: 0.012813684247514789
complete-iteration_max: 0.2117442644777752
complete-iteration_mean: 0.17832384177920346
complete-iteration_median: 0.17385619595882595
complete-iteration_min: 0.15383871072138677
deviation-center-line_max: 3.3195985661719636
deviation-center-line_mean: 1.6116857870353547
deviation-center-line_min: 0.15276072824708598
deviation-heading_max: 11.330213185393845
deviation-heading_mean: 7.00335544152427
deviation-heading_median: 7.6495089154358435
deviation-heading_min: 1.3841907498315469
driven_any_max: 7.144832259704721
driven_any_mean: 4.553303590265521
driven_any_median: 5.231529611328
driven_any_min: 0.6053228787013621
driven_lanedir_consec_max: 6.825002661706019
driven_lanedir_consec_mean: 3.4795283021233225
driven_lanedir_consec_min: 0.17587355356552914
driven_lanedir_max: 6.825002661706019
driven_lanedir_mean: 3.48771862208529
driven_lanedir_median: 3.474999136534806
driven_lanedir_min: 0.17587355356552914
get_duckie_state_max: 1.5170801253545852e-06
get_duckie_state_mean: 1.4446596311665484e-06
get_duckie_state_median: 1.426739656954979e-06
get_duckie_state_min: 1.408079085401651e-06
get_robot_state_max: 0.0038386739435650056
get_robot_state_mean: 0.003774472574156625
get_robot_state_median: 0.0037799449288577064
get_robot_state_min: 0.003699326495346082
get_state_dump_max: 0.004875912552788144
get_state_dump_mean: 0.004777427045634706
get_state_dump_median: 0.004765595127204177
get_state_dump_min: 0.004702605375342325
get_ui_image_max: 0.03820052317210606
get_ui_image_mean: 0.031529014214715566
get_ui_image_median: 0.030823665097988617
get_ui_image_min: 0.026268203490778966
in-drivable-lane_max: 29.9500000000002
in-drivable-lane_mean: 13.34999999999998
in-drivable-lane_min: 4.749999999999982
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 7.144832259704721, "get_ui_image": 0.028519765820531028, "step_physics": 0.0958140990219942, "survival_time": 59.99999999999873, "driven_lanedir": 6.825002661706019, "get_state_dump": 0.004702605375342325, "get_robot_state": 0.003699326495346082, "sim_render-ego0": 0.003839362372367408, "get_duckie_state": 1.4245559730498023e-06, "in-drivable-lane": 4.749999999999982, "deviation-heading": 7.458659263323708, "agent_compute-ego0": 0.012813684247514789, "complete-iteration": 0.16358913132590516, "set_robot_commands": 0.0022082805236511484, "deviation-center-line": 1.6747539878340445, "driven_lanedir_consec": 6.825002661706019, "sim_compute_sim_state": 0.009888628539594385, "sim_compute_performance-ego0": 0.0020131751162920466}, "LF-norm-zigzag-000-ego0": {"driven_any": 0.6053228787013621, "get_ui_image": 0.03820052317210606, "step_physics": 0.13080579184350513, "survival_time": 8.349999999999984, "driven_lanedir": 0.17587355356552914, "get_state_dump": 0.004875912552788144, "get_robot_state": 0.0038386739435650056, "sim_render-ego0": 0.004055382240386237, "get_duckie_state": 1.5170801253545852e-06, "in-drivable-lane": 5.949999999999984, "deviation-heading": 1.3841907498315469, "agent_compute-ego0": 0.013927590279352095, "complete-iteration": 0.2117442644777752, "set_robot_commands": 0.002374182144800822, "deviation-center-line": 0.15276072824708598, "driven_lanedir_consec": 0.17587355356552914, "sim_compute_sim_state": 0.01146711338134039, "sim_compute_performance-ego0": 0.0021053055922190347}, "LF-norm-techtrack-000-ego0": {"driven_any": 5.478319190379025, "get_ui_image": 0.0331275643754462, "step_physics": 0.1079054095167403, "survival_time": 59.99999999999873, "driven_lanedir": 3.925284545210561, "get_state_dump": 0.004789066949950765, "get_robot_state": 0.003817748864624125, "sim_render-ego0": 0.003908858311166374, "get_duckie_state": 1.4289233408601556e-06, "in-drivable-lane": 12.749999999999757, 
"deviation-heading": 11.330213185393845, "agent_compute-ego0": 0.012965964834259313, "complete-iteration": 0.18412326059174677, "set_robot_commands": 0.0022798537413940938, "deviation-center-line": 3.3195985661719636, "driven_lanedir_consec": 3.925284545210561, "sim_compute_sim_state": 0.01314908261104587, "sim_compute_performance-ego0": 0.0020893902504672416}, "LF-norm-small_loop-000-ego0": {"driven_any": 4.9847400322769735, "get_ui_image": 0.026268203490778966, "step_physics": 0.0910934869891698, "survival_time": 59.99999999999873, "driven_lanedir": 3.024713727859051, "get_state_dump": 0.004742123304457589, "get_robot_state": 0.003742140993091288, "sim_render-ego0": 0.0038672704482257216, "get_duckie_state": 1.408079085401651e-06, "in-drivable-lane": 29.9500000000002, "deviation-heading": 7.840358567547979, "agent_compute-ego0": 0.01320242782516543, "complete-iteration": 0.15383871072138677, "set_robot_commands": 0.00226393845754301, "deviation-center-line": 1.2996298658883243, "driven_lanedir_consec": 2.991952448011182, "sim_compute_sim_state": 0.006568526149689407, "sim_compute_performance-ego0": 0.0019998969285315417}}
set_robot_commands_max: 0.002374182144800822
set_robot_commands_mean: 0.0022815637168472685
set_robot_commands_median: 0.002271896099468552
set_robot_commands_min: 0.0022082805236511484
sim_compute_performance-ego0_max: 0.0021053055922190347
sim_compute_performance-ego0_mean: 0.0020519419718774664
sim_compute_performance-ego0_median: 0.0020512826833796443
sim_compute_performance-ego0_min: 0.0019998969285315417
sim_compute_sim_state_max: 0.01314908261104587
sim_compute_sim_state_mean: 0.010268337670417512
sim_compute_sim_state_median: 0.010677870960467389
sim_compute_sim_state_min: 0.006568526149689407
sim_render-ego0_max: 0.004055382240386237
sim_render-ego0_mean: 0.003917718343036435
sim_render-ego0_median: 0.003888064379696048
sim_render-ego0_min: 0.003839362372367408
simulation-passed: 1
step_physics_max: 0.13080579184350513
step_physics_mean: 0.10640469684285236
step_physics_median: 0.10185975426936723
step_physics_min: 0.0910934869891698
survival_time_max: 59.99999999999873
survival_time_mean: 47.08749999999904
survival_time_min: 8.349999999999984
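The aggregate rows above are plain reductions over the four episodes listed in the per-episodes details. A minimal sketch recomputing the survival_time aggregates (the values are copied from the details blob above; this is not the scoring code itself):

```python
import statistics

# survival_time for each of the four episodes, copied from the
# per-episodes details above
survival_time = [
    59.99999999999873,  # LF-norm-loop-000
    8.349999999999984,  # LF-norm-zigzag-000
    59.99999999999873,  # LF-norm-techtrack-000
    59.99999999999873,  # LF-norm-small_loop-000
]

print("survival_time_max:", max(survival_time))                   # 59.99999999999873
print("survival_time_mean:", statistics.mean(survival_time))      # ≈ 47.0875
print("survival_time_median:", statistics.median(survival_time))  # 59.99999999999873
print("survival_time_min:", min(survival_time))                   # 8.349999999999984
```

Note that with an even number of episodes, `statistics.median` averages the two middle values, which is why the median can sit exactly at the survival cap when at least three episodes ran to the full 60 seconds.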
No reset possible
58540 / LFv-sim / success / yes / 0:24:27
58538 / LFv-sim / success / yes / 0:27:17
58533 / LFv-sim / success / yes / 0:29:18
52469 / LFv-sim / error / no / 0:08:12
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140225157518000
- M:video_aido:cmdline(in:/;out:/) 140225157517856
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
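The root cause of this job failure is that PIL could not parse 'banner1.png' (most likely an empty or truncated asset), and the error propagated through procgraph's StaticImage block and aborted video rendering. A hedged sketch, not the evaluator's code, of how such an asset can be validated up front with Pillow:

```python
from pathlib import Path

from PIL import Image, UnidentifiedImageError


def is_readable_image(path: str) -> bool:
    """Return True if Pillow can identify and parse the file as an image.

    Image.open() only inspects the header, so a zero-byte or truncated
    file fails here with UnidentifiedImageError, exactly as in the
    traceback above; verify() additionally checks the file's integrity.
    """
    try:
        with Image.open(path) as im:
            im.verify()
        return True
    except (FileNotFoundError, UnidentifiedImageError):
        return False


# A file that is not a valid image is rejected instead of crashing
# deep inside the rendering pipeline:
Path("banner1_check.txt").write_text("not an image")
print(is_readable_image("banner1_check.txt"))  # False
```

Running such a check on banner assets before launching the episode renderer would turn this InvalidEvaluator crash into a clear configuration error.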
41797 / LFv-sim / success / no / 0:09:34
38352 / LFv-sim / success / no / 0:10:06
38350 / LFv-sim / success / no / 0:09:40
36447 / LFv-sim / success / no / 0:10:40
36446 / LFv-sim / success / no / 0:10:44
35862 / LFv-sim / success / no / 0:01:04
35446 / LFv-sim / error / no / 0:22:37
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6844/LFv-sim-reg02-1b92df2e7e91-1-job35446:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6844/LFv-sim-reg02-1b92df2e7e91-1-job35446/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6844/LFv-sim-reg02-1b92df2e7e91-1-job35446/docker-compose.original.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6844/LFv-sim-reg02-1b92df2e7e91-1-job35446/docker-compose.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6844/LFv-sim-reg02-1b92df2e7e91-1-job35446/logs/challenges-runner/stdout.log
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6844/LFv-sim-reg02-1b92df2e7e91-1-job35446/logs/challenges-runner/stderr.log
35130 / LFv-sim / success / no / 0:24:01
33458 / LFv-sim / success / no / 0:26:45
33437 / LFv-sim / success / no / 0:21:41
33435 / LFv-sim / success / no / 0:23:09