
Submission 9272

Submission: 9272
Competing: yes
Challenge: aido5-LF-sim-validation
User: Liam Paull 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58439
Next:
User label: template-ros
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58439

Episodes:

LF-norm-loop-000
LF-norm-small_loop-000
LF-norm-techtrack-000
LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | duration | message
58439 | LFv-sim | success | yes | 0:05:45
driven_lanedir_consec_median: 0.33269572821839755
survival_time_median: 5.824999999999987
deviation-center-line_median: 0.1368039767828624
in-drivable-lane_median: 3.899999999999987


other stats
agent_compute-ego0_max: 0.013411689939953031
agent_compute-ego0_mean: 0.013050549735075316
agent_compute-ego0_median: 0.013108731361344208
agent_compute-ego0_min: 0.012573046277659808
complete-iteration_max: 0.23284610487380117
complete-iteration_mean: 0.20658532307332975
complete-iteration_median: 0.2075105700161124
complete-iteration_min: 0.17847404738729314
deviation-center-line_max: 0.22764305121811432
deviation-center-line_mean: 0.14766636213426176
deviation-center-line_min: 0.08941444375320795
deviation-heading_max: 2.103862091739065
deviation-heading_mean: 1.1685118128742642
deviation-heading_median: 1.084001976130951
deviation-heading_min: 0.40218120749608927
driven_any_max: 3.637365739647189
driven_any_mean: 1.9644600938530443
driven_any_median: 1.4755275803842491
driven_any_min: 1.26941947499649
driven_lanedir_consec_max: 0.640673130335168
driven_lanedir_consec_mean: 0.3909701534865413
driven_lanedir_consec_min: 0.25781602717420204
driven_lanedir_max: 0.640673130335168
driven_lanedir_mean: 0.4011790078882024
driven_lanedir_median: 0.3531134370217197
driven_lanedir_min: 0.25781602717420204
get_duckie_state_max: 1.457987221643384e-06
get_duckie_state_mean: 1.3699385759148173e-06
get_duckie_state_median: 1.364882724299423e-06
get_duckie_state_min: 1.2920016334170388e-06
get_robot_state_max: 0.003874948919927321
get_robot_state_mean: 0.0038157342404602783
get_robot_state_median: 0.003811161630035441
get_robot_state_min: 0.003765664781842913
get_state_dump_max: 0.004786108815392783
get_state_dump_mean: 0.0047175417629470116
get_state_dump_median: 0.004730630248728579
get_state_dump_min: 0.004622797738938105
get_ui_image_max: 0.037910931515243815
get_ui_image_mean: 0.03227556809672097
get_ui_image_median: 0.03144634929372687
get_ui_image_min: 0.028298642284186312
in-drivable-lane_max: 11.65000000000004
in-drivable-lane_mean: 5.475000000000002
in-drivable-lane_min: 2.4499999999999935
per-episodes details:
    "LF-norm-loop-000-ego0": {"driven_any": 3.637365739647189, "get_ui_image": 0.03071729844387579, "step_physics": 0.1250226479923858, "survival_time": 13.400000000000055, "driven_lanedir": 0.25781602717420204, "get_state_dump": 0.004734234295813124, "get_robot_state": 0.003874948919927321, "sim_render-ego0": 0.004033471571911667, "get_duckie_state": 1.457987221643384e-06, "in-drivable-lane": 11.65000000000004, "deviation-heading": 1.3616443694993834, "agent_compute-ego0": 0.013034264837499, "complete-iteration": 0.1963973860758388, "set_robot_commands": 0.002378531105013142, "deviation-center-line": 0.16506542598227603, "driven_lanedir_consec": 0.25781602717420204, "sim_compute_sim_state": 0.010295469078432672, "sim_compute_performance-ego0": 0.0022134647936625997}
    "LF-norm-zigzag-000-ego0": {"driven_any": 1.26941947499649, "get_ui_image": 0.037910931515243815, "step_physics": 0.1543017490854803, "survival_time": 5.249999999999989, "driven_lanedir": 0.26765836918319263, "get_state_dump": 0.0047270262016440336, "get_robot_state": 0.003767593851629293, "sim_render-ego0": 0.0038920528483840657, "get_duckie_state": 1.2955575619103773e-06, "in-drivable-lane": 3.799999999999989, "deviation-heading": 0.8063595827625185, "agent_compute-ego0": 0.013183197885189415, "complete-iteration": 0.23284610487380117, "set_robot_commands": 0.0024043209147903153, "deviation-center-line": 0.10854252758344876, "driven_lanedir_consec": 0.26765987656504, "sim_compute_sim_state": 0.0104491170847191, "sim_compute_performance-ego0": 0.002107831667054374}
    "LF-norm-techtrack-000-ego0": {"driven_any": 1.2782693400702378, "get_ui_image": 0.03217540014357794, "step_physics": 0.14783373787289575, "survival_time": 5.1999999999999895, "driven_lanedir": 0.4385685048602468, "get_state_dump": 0.004622797738938105, "get_robot_state": 0.003765664781842913, "sim_render-ego0": 0.00389768055507115, "get_duckie_state": 1.2920016334170388e-06, "in-drivable-lane": 2.4499999999999935, "deviation-heading": 2.103862091739065, "agent_compute-ego0": 0.013411689939953031, "complete-iteration": 0.21862375395638603, "set_robot_commands": 0.0022265706743512833, "deviation-center-line": 0.22764305121811432, "driven_lanedir_consec": 0.3977315798717551, "sim_compute_sim_state": 0.008464334124610538, "sim_compute_performance-ego0": 0.0021341278439476375}
    "LF-norm-small_loop-000-ego0": {"driven_any": 1.6727858206982602, "get_ui_image": 0.028298642284186312, "step_physics": 0.115069448485855, "survival_time": 6.399999999999985, "driven_lanedir": 0.640673130335168, "get_state_dump": 0.004786108815392783, "get_robot_state": 0.003854729408441588, "sim_render-ego0": 0.0038895274317541782, "get_duckie_state": 1.434207886688469e-06, "in-drivable-lane": 3.999999999999986, "deviation-heading": 0.40218120749608927, "agent_compute-ego0": 0.012573046277659808, "complete-iteration": 0.17847404738729314, "set_robot_commands": 0.002429187759872555, "deviation-center-line": 0.08941444375320795, "driven_lanedir_consec": 0.640673130335168, "sim_compute_sim_state": 0.005378593770108481, "sim_compute_performance-ego0": 0.002103206723235374}
set_robot_commands_max: 0.002429187759872555
set_robot_commands_mean: 0.0023596526135068237
set_robot_commands_median: 0.0023914260099017286
set_robot_commands_min: 0.0022265706743512833
sim_compute_performance-ego0_max: 0.0022134647936625997
sim_compute_performance-ego0_mean: 0.002139657756974996
sim_compute_performance-ego0_median: 0.0021209797555010058
sim_compute_performance-ego0_min: 0.002103206723235374
sim_compute_sim_state_max: 0.0104491170847191
sim_compute_sim_state_mean: 0.008646878514467698
sim_compute_sim_state_median: 0.009379901601521605
sim_compute_sim_state_min: 0.005378593770108481
sim_render-ego0_max: 0.004033471571911667
sim_render-ego0_mean: 0.003928183101780265
sim_render-ego0_median: 0.0038948667017276073
sim_render-ego0_min: 0.0038895274317541782
simulation-passed: 1
step_physics_max: 0.1543017490854803
step_physics_mean: 0.1355568958591542
step_physics_median: 0.13642819293264077
step_physics_min: 0.115069448485855
survival_time_max: 13.400000000000055
survival_time_mean: 7.562500000000004
survival_time_min: 5.1999999999999895
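
The aggregate rows (_min, _max, _mean, _median) are plain summaries of the four episodes in per-episodes details. A minimal sketch of how to reproduce them, assuming the details dict above has been saved to a local JSON file (the filename is illustrative):

import json
import statistics

# Hypothetical file holding the "per-episodes details" dict shown above.
with open("per_episodes.json") as f:
    episodes = json.load(f)

# survival_time across the four episodes: ~13.4, ~5.25, ~5.2, ~6.4
survival = [ep["survival_time"] for ep in episodes.values()]

print(statistics.median(survival))   # ~5.825  -> survival_time_median
print(statistics.mean(survival))     # ~7.5625 -> survival_time_mean
print(max(survival), min(survival))  # -> survival_time_max, survival_time_min

With four episodes the median is the average of the two middle values, which is why survival_time_median (5.825) matches none of the individual episodes.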
58431 | LFv-sim | success | yes | 0:06:27
52377 | LFv-sim | error | no | 0:02:00 | InvalidEvaluator: Tr [...]
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139665676225840
- M:video_aido:cmdline(in:/;out:/) 139665676223200
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
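
The chained traceback shows the failure in three layers: PIL found 'banner1.png' but could not identify it as an image (UnidentifiedImageError, which PIL raises for files that exist but cannot be parsed), procgraph's imread re-raised that as a ValueError, and the experiment manager classified the result as InvalidEvaluator, i.e. a problem with the evaluation environment rather than with the submission. A minimal sketch of the re-raising pattern, assuming only standard Pillow behaviour (the wrapper below is illustrative, not the actual procgraph code):

from PIL import Image, UnidentifiedImageError

def imread(filename: str):
    try:
        return Image.open(filename)
    except UnidentifiedImageError as e:
        # `raise ... from e` produces the "direct cause of the following
        # exception" chaining visible in the log above.
        raise ValueError(f'Could not open filename "{filename}".') from e

# Fails like the log when banner1.png exists but is not a readable image
# (for example an empty or corrupted file).
imread("banner1.png")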
41766 | LFv-sim | success | no | 0:06:47
41765 | LFv-sim | success | no | 0:04:33
38262 | LFv-sim | success | no | 0:05:19
36376 | LFv-sim | success | no | 0:04:25
36374 | LFv-sim | success | no | 0:05:43
35808 | LFv-sim | success | no | 0:01:06
35400 | LFv-sim | error | no | 0:08:37 | The result file is n [...]
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9272/LFv-sim-reg01-94a6fab21ac9-1-job35400:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9272/LFv-sim-reg01-94a6fab21ac9-1-job35400/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9272/LFv-sim-reg01-94a6fab21ac9-1-job35400/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9272/LFv-sim-reg01-94a6fab21ac9-1-job35400/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9272/LFv-sim-reg01-94a6fab21ac9-1-job35400/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9272/LFv-sim-reg01-94a6fab21ac9-1-job35400/logs/challenges-runner/stderr.log
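
The message implies a simple contract: after a run, the runner looks for challenge-results/challenge_results.yaml inside the job's working directory and fails the job if it is absent. A minimal sketch of that check, assuming PyYAML; the function name and signature are illustrative, not the actual duckietown-challenges API:

from pathlib import Path
import yaml  # PyYAML

def read_challenge_results(wdir: str):
    # The runner's expected output location, per the error message above.
    results = Path(wdir) / "challenge-results" / "challenge_results.yaml"
    if not results.exists():
        raise FileNotFoundError(f"File '{results}' does not exist.")
    return yaml.safe_load(results.read_text())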
35039 | LFv-sim | success | no | 0:13:34
34581 | LFv-sim | success | no | 0:12:21
34580 | LFv-sim | success | no | 0:14:13