
Submission 6842

Submission: 6842
Competing: yes
Challenge: aido5-LF-sim-validation
User: Himanshu Arora 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58546
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

58546

Episodes (detailed per-episode statistics are available for each):

- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job 58546 | step: LFv-sim | status: success | up to date: yes | duration: 0:20:48
Artefacts hidden.
driven_lanedir_consec_median: 1.369809816459718
survival_time_median: 38.79999999999993
deviation-center-line_median: 0.9757756004173964
in-drivable-lane_median: 26.97499999999979


Other stats

agent_compute-ego0_max: 0.013187305086609945
agent_compute-ego0_mean: 0.012778584672878174
agent_compute-ego0_median: 0.01268754199795101
agent_compute-ego0_min: 0.012551949609000728
complete-iteration_max: 0.17992923052414603
complete-iteration_mean: 0.17280386029607137
complete-iteration_median: 0.17404510254348848
complete-iteration_min: 0.16319600557316247
deviation-center-line_max: 2.85943674679007
deviation-center-line_mean: 1.3253461977998249
deviation-center-line_min: 0.49039684357443686
deviation-heading_max: 6.737338443463686
deviation-heading_mean: 3.717919967894424
deviation-heading_median: 2.7954353768650764
deviation-heading_min: 2.543470674383859
driven_any_max: 7.25501758244321
driven_any_mean: 5.017088138792421
driven_any_median: 5.273899010405366
driven_any_min: 2.2655369519157462
driven_lanedir_consec_max: 3.5455711447493834
driven_lanedir_consec_mean: 1.7513336228236152
driven_lanedir_consec_min: 0.7201437136256412
driven_lanedir_max: 3.5455711447493834
driven_lanedir_mean: 1.7513336228236152
driven_lanedir_median: 1.369809816459718
driven_lanedir_min: 0.7201437136256412
get_duckie_state_max: 2.176775408618023e-06
get_duckie_state_mean: 2.0824851748588395e-06
get_duckie_state_median: 2.11062866838897e-06
get_duckie_state_min: 1.931907954039397e-06
get_robot_state_max: 0.003833086504412524
get_robot_state_mean: 0.003708110540313848
get_robot_state_median: 0.003696670668013524
get_robot_state_min: 0.003606014320815819
get_state_dump_max: 0.004878664292352048
get_state_dump_mean: 0.004679044399894286
get_state_dump_median: 0.004665826682774526
get_state_dump_min: 0.004505859941676043
get_ui_image_max: 0.03521759751914204
get_ui_image_mean: 0.03103338344636485
get_ui_image_median: 0.030220155644404313
get_ui_image_min: 0.02847562497750872
in-drivable-lane_max: 28.649999999999608
in-drivable-lane_mean: 23.212499999999828
in-drivable-lane_min: 10.250000000000128
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 7.25501758244321, "get_ui_image": 0.02877092721327296, "step_physics": 0.10490713704307124, "survival_time": 52.94999999999913, "driven_lanedir": 3.5455711447493834, "get_state_dump": 0.004717687615808451, "get_robot_state": 0.003718326451643458, "sim_render-ego0": 0.003798641348784825, "get_duckie_state": 2.1131533496784713e-06, "in-drivable-lane": 25.69999999999938, "deviation-heading": 6.737338443463686, "agent_compute-ego0": 0.012551949609000728, "complete-iteration": 0.17264916267035144, "set_robot_commands": 0.0022672695933647876, "deviation-center-line": 2.85943674679007, "driven_lanedir_consec": 3.5455711447493834, "sim_compute_sim_state": 0.009846558210984716, "sim_compute_performance-ego0": 0.0019870758056640624}, "LF-norm-zigzag-000-ego0": {"driven_any": 4.669430088706166, "get_ui_image": 0.03521759751914204, "step_physics": 0.10557488600413004, "survival_time": 34.45000000000018, "driven_lanedir": 0.7201437136256412, "get_state_dump": 0.004505859941676043, "get_robot_state": 0.003606014320815819, "sim_render-ego0": 0.00371756380882816, "get_duckie_state": 2.108103987099468e-06, "in-drivable-lane": 28.250000000000195, "deviation-heading": 2.543470674383859, "agent_compute-ego0": 0.012745545221411664, "complete-iteration": 0.17992923052414603, "set_robot_commands": 0.0021875982699186905, "deviation-center-line": 0.4996437211347808, "driven_lanedir_consec": 0.7201437136256412, "sim_compute_sim_state": 0.010347349056299185, "sim_compute_performance-ego0": 0.0019444648770318516}, "LF-norm-techtrack-000-ego0": {"driven_any": 5.878367932104565, "get_ui_image": 0.03166938407553567, "step_physics": 0.10242625519081398, "survival_time": 43.149999999999686, "driven_lanedir": 1.9030394355120637, "get_state_dump": 0.004613965749740601, "get_robot_state": 0.00367501488438359, "sim_render-ego0": 0.00373961141815892, "get_duckie_state": 1.931907954039397e-06, "in-drivable-lane": 28.649999999999608, "deviation-heading": 2.995115020731234, "agent_compute-ego0": 0.012629538774490356, "complete-iteration": 0.17544104241662556, "set_robot_commands": 0.0021635625097486707, "deviation-center-line": 1.4519074797000118, "driven_lanedir_consec": 1.9030394355120637, "sim_compute_sim_state": 0.012423444401334834, "sim_compute_performance-ego0": 0.002018600426338337}, "LF-norm-small_loop-000-ego0": {"driven_any": 2.2655369519157462, "get_ui_image": 0.02847562497750872, "step_physics": 0.09846804045528346, "survival_time": 17.25000000000011, "driven_lanedir": 0.8365801974073721, "get_state_dump": 0.004878664292352048, "get_robot_state": 0.003833086504412524, "sim_render-ego0": 0.003973663886847524, "get_duckie_state": 2.176775408618023e-06, "in-drivable-lane": 10.250000000000128, "deviation-heading": 2.595755732998919, "agent_compute-ego0": 0.013187305086609945, "complete-iteration": 0.16319600557316247, "set_robot_commands": 0.002320306838592353, "deviation-center-line": 0.49039684357443686, "driven_lanedir_consec": 0.8365801974073721, "sim_compute_sim_state": 0.005898206909267889, "sim_compute_performance-ego0": 0.002073449206490048}}
set_robot_commands_max: 0.002320306838592353
set_robot_commands_mean: 0.0022346843029061253
set_robot_commands_median: 0.002227433931641739
set_robot_commands_min: 0.0021635625097486707
sim_compute_performance-ego0_max: 0.002073449206490048
sim_compute_performance-ego0_mean: 0.0020058975788810746
sim_compute_performance-ego0_median: 0.0020028381160012
sim_compute_performance-ego0_min: 0.0019444648770318516
sim_compute_sim_state_max: 0.012423444401334834
sim_compute_sim_state_mean: 0.009628889644471656
sim_compute_sim_state_median: 0.01009695363364195
sim_compute_sim_state_min: 0.005898206909267889
sim_render-ego0_max: 0.003973663886847524
sim_render-ego0_mean: 0.0038073701156548577
sim_render-ego0_median: 0.003769126383471872
sim_render-ego0_min: 0.00371756380882816
simulation-passed: 1
step_physics_max: 0.10557488600413004
step_physics_mean: 0.10284407967332468
step_physics_median: 0.1036666961169426
step_physics_min: 0.09846804045528346
survival_time_max: 52.94999999999913
survival_time_mean: 36.949999999999775
survival_time_min: 17.25000000000011
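The min/mean/median/max rows above are plain summaries of the four per-episode values listed in the per-episodes details. A minimal sketch of how such a summary could be reproduced (the survival_time values are copied from the details above; `aggregate` is an illustrative helper, not the evaluator's actual code):

```python
import statistics

# survival_time per episode, copied from the per-episodes details above
survival_times = {
    "LF-norm-loop-000-ego0": 52.94999999999913,
    "LF-norm-zigzag-000-ego0": 34.45000000000018,
    "LF-norm-techtrack-000-ego0": 43.149999999999686,
    "LF-norm-small_loop-000-ego0": 17.25000000000011,
}

def aggregate(values):
    # Summarize a list of per-episode values the way the stats listing does:
    # one min, mean, median, and max per metric.
    vs = list(values)
    return {
        "min": min(vs),
        "mean": statistics.mean(vs),
        "median": statistics.median(vs),
        "max": max(vs),
    }

summary = aggregate(survival_times.values())
```

With four episodes the median is the average of the two middle values, which is why survival_time_median (38.8) does not coincide with any single episode.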
Job 58543 | step: LFv-sim | status: success | up to date: yes | duration: 0:17:32
Job 52471 | step: LFv-sim | status: error | up to date: no | duration: 0:04:49
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139712162812064
- M:video_aido:cmdline(in:/;out:/) 139712162730624
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
Job 41803 | step: LFv-sim | status: success | up to date: no | duration: 0:09:38
Job 38353 | step: LFv-sim | status: success | up to date: no | duration: 0:18:52
Job 36449 | step: LFv-sim | status: success | up to date: no | duration: 0:09:27
Job 35871 | step: LFv-sim | status: success | up to date: no | duration: 0:01:02
Job 35869 | step: LFv-sim | status: success | up to date: no | duration: 0:01:02
Job 35448 | step: LFv-sim | status: error | up to date: no | duration: 0:22:52
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6842/LFv-sim-reg05-b2dee9d94ee0-1-job35448:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6842/LFv-sim-reg05-b2dee9d94ee0-1-job35448/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6842/LFv-sim-reg05-b2dee9d94ee0-1-job35448/docker-compose.original.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6842/LFv-sim-reg05-b2dee9d94ee0-1-job35448/docker-compose.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6842/LFv-sim-reg05-b2dee9d94ee0-1-job35448/logs/challenges-runner/stdout.log
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6842/LFv-sim-reg05-b2dee9d94ee0-1-job35448/logs/challenges-runner/stderr.log
Job 35132 | step: LFv-sim | status: success | up to date: no | duration: 0:23:28
Job 33544 | step: LFv-sim | status: success | up to date: no | duration: 0:25:17
Job 33433 | step: LFv-sim | status: success | up to date: no | duration: 0:18:04
Job 33432 | step: LFv-sim | status: success | up to date: no | duration: 0:16:33