Submission 9319

Submission: 9319
Competing: yes
Challenge: aido5-LF-sim-validation
User: Yishu Malhotra 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58203
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58203

Detailed per-episode statistics were linked from images on the original page. Episodes evaluated:

 - LF-norm-loop-000
 - LF-norm-small_loop-000
 - LF-norm-techtrack-000
 - LF-norm-zigzag-000

Evaluation jobs for this submission

Columns: job ID, step, status, up to date, date started, date completed, duration, message.

Job 58203 | LFv-sim | success | up to date: yes | duration: 0:06:46
driven_lanedir_consec_median: 0.6702322779381074
survival_time_median: 8.574999999999987
deviation-center-line_median: 0.20603486405638333
in-drivable-lane_median: 6.199999999999991


Other stats:

agent_compute-ego0_max: 0.04698152653872967
agent_compute-ego0_mean: 0.022112465921700443
agent_compute-ego0_median: 0.01427000641454885
agent_compute-ego0_min: 0.012928324318974395
complete-iteration_max: 0.2024592347443104
complete-iteration_mean: 0.19077604466589704
complete-iteration_median: 0.1967642916665401
complete-iteration_min: 0.16711636058619764
deviation-center-line_max: 0.2731380942152386
deviation-center-line_mean: 0.18722730414662708
deviation-center-line_min: 0.0637013942585031
deviation-heading_max: 1.9902418726395847
deviation-heading_mean: 1.211640955816277
deviation-heading_median: 1.1837605062026506
deviation-heading_min: 0.4888009382202221
driven_any_max: 4.498708115406316
driven_any_mean: 2.8166322302310314
driven_any_median: 2.4807944909898865
driven_any_min: 1.806231823538036
driven_lanedir_consec_max: 0.6890992938312586
driven_lanedir_consec_mean: 0.6299015198129495
driven_lanedir_consec_min: 0.4900422295443247
driven_lanedir_max: 0.6890992938312586
driven_lanedir_mean: 0.6299015198129495
driven_lanedir_median: 0.6702322779381074
driven_lanedir_min: 0.4900422295443247
get_duckie_state_max: 1.6987323760986328e-06
get_duckie_state_mean: 1.6214825519061094e-06
get_duckie_state_median: 1.6283047112393683e-06
get_duckie_state_min: 1.530588409047068e-06
get_robot_state_max: 0.004309583455324173
get_robot_state_mean: 0.004016531239859527
get_robot_state_median: 0.003977154875979012
get_robot_state_min: 0.003802231752155909
get_state_dump_max: 0.005449287593364716
get_state_dump_mean: 0.00518999830268337
get_state_dump_median: 0.005188050783342785
get_state_dump_min: 0.004934604050683193
get_ui_image_max: 0.037755115532580714
get_ui_image_mean: 0.032635456603736686
get_ui_image_median: 0.031995527135829135
get_ui_image_min: 0.02879565661070777
in-drivable-lane_max: 10.700000000000063
in-drivable-lane_mean: 6.675000000000008
in-drivable-lane_min: 3.599999999999987
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 2.4638713585967347, "get_ui_image": 0.02879565661070777, "step_physics": 0.0988197222433455, "survival_time": 9.099999999999994, "driven_lanedir": 0.6522918044620434, "get_state_dump": 0.004934604050683193, "get_robot_state": 0.003802231752155909, "sim_render-ego0": 0.003866207404214828, "get_duckie_state": 1.666324386179773e-06, "in-drivable-lane": 6.3999999999999995, "deviation-heading": 1.1376155899120888, "agent_compute-ego0": 0.012928324318974395, "complete-iteration": 0.16711636058619764, "set_robot_commands": 0.0022989856740816043, "deviation-center-line": 0.2731380942152386, "driven_lanedir_consec": 0.6522918044620434, "sim_compute_sim_state": 0.00946338059472256, "sim_compute_performance-ego0": 0.0021046575952748782},
 "LF-norm-zigzag-000-ego0": {"driven_any": 2.497717623383038, "get_ui_image": 0.037755115532580714, "step_physics": 0.1209704213672214, "survival_time": 8.04999999999998, "driven_lanedir": 0.4900422295443247, "get_state_dump": 0.005034804344177246, "get_robot_state": 0.003894233409269356, "sim_render-ego0": 0.004113178194305043, "get_duckie_state": 1.530588409047068e-06, "in-drivable-lane": 5.999999999999983, "deviation-heading": 1.2299054224932124, "agent_compute-ego0": 0.01445373488061222, "complete-iteration": 0.2012124444231575, "set_robot_commands": 0.002383747218567648, "deviation-center-line": 0.1578172363868847, "driven_lanedir_consec": 0.4900422295443247, "sim_compute_sim_state": 0.010334138517026548, "sim_compute_performance-ego0": 0.0021705553855425044},
 "LF-norm-techtrack-000-ego0": {"driven_any": 4.498708115406316, "get_ui_image": 0.0348116730650266, "step_physics": 0.11231018188926908, "survival_time": 14.350000000000067, "driven_lanedir": 0.6881727514141713, "get_state_dump": 0.005341297222508324, "get_robot_state": 0.004060076342688667, "sim_render-ego0": 0.0042356691426701015, "get_duckie_state": 1.590285036298964e-06, "in-drivable-lane": 10.700000000000063, "deviation-heading": 1.9902418726395847, "agent_compute-ego0": 0.01408627794848548, "complete-iteration": 0.19231613890992272, "set_robot_commands": 0.0024591262141863504, "deviation-center-line": 0.25425249172588194, "driven_lanedir_consec": 0.6881727514141713, "sim_compute_sim_state": 0.01262678537103865, "sim_compute_performance-ego0": 0.0022795258296860587},
 "LF-norm-small_loop-000-ego0": {"driven_any": 1.806231823538036, "get_ui_image": 0.02917938120663166, "step_physics": 0.10131096467375755, "survival_time": 6.349999999999985, "driven_lanedir": 0.6890992938312586, "get_state_dump": 0.005449287593364716, "get_robot_state": 0.004309583455324173, "sim_render-ego0": 0.004221860319375992, "get_duckie_state": 1.6987323760986328e-06, "in-drivable-lane": 3.599999999999987, "deviation-heading": 0.4888009382202221, "agent_compute-ego0": 0.04698152653872967, "complete-iteration": 0.2024592347443104, "set_robot_commands": 0.002674533054232598, "deviation-center-line": 0.0637013942585031, "driven_lanedir_consec": 0.6890992938312586, "sim_compute_sim_state": 0.005842886865139008, "sim_compute_performance-ego0": 0.00238032266497612}}
set_robot_commands_max: 0.002674533054232598
set_robot_commands_mean: 0.00245409804026705
set_robot_commands_median: 0.0024214367163769993
set_robot_commands_min: 0.0022989856740816043
sim_compute_performance-ego0_max: 0.00238032266497612
sim_compute_performance-ego0_mean: 0.0022337653688698903
sim_compute_performance-ego0_median: 0.0022250406076142815
sim_compute_performance-ego0_min: 0.0021046575952748782
sim_compute_sim_state_max: 0.01262678537103865
sim_compute_sim_state_mean: 0.00956679783698169
sim_compute_sim_state_median: 0.009898759555874556
sim_compute_sim_state_min: 0.005842886865139008
sim_render-ego0_max: 0.0042356691426701015
sim_render-ego0_mean: 0.004109228765141491
sim_render-ego0_median: 0.004167519256840517
sim_render-ego0_min: 0.003866207404214828
simulation-passed: 1
step_physics_max: 0.1209704213672214
step_physics_mean: 0.10835282254339838
step_physics_median: 0.10681057328151332
step_physics_min: 0.0988197222433455
survival_time_max: 14.350000000000067
survival_time_mean: 9.462500000000007
survival_time_min: 6.349999999999985
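
The timing entries are per-iteration wall-clock seconds, so the step rate of the simulation loop follows directly from complete-iteration_median (a back-of-the-envelope reading of the numbers above, not a figure reported by the platform):

# ~0.197 s per complete iteration -> roughly 5 simulated steps per second,
# with step_physics (~0.107 s median) accounting for over half of each iteration
rate_hz = 1.0 / 0.1967642916665401
print(f"{rate_hz:.2f} iterations/s")  # ~5.08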
Job 52322 | LFv-sim | error | up to date: no | duration: 0:04:30
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140404768645568
- M:video_aido:cmdline(in:/;out:/) 140404768663872
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
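
The chain above bottoms out in PIL: the file shipped as banner1.png exists but is not a decodable image, so the video-assembly step of the experiment manager fails. A quick way to reproduce the check locally (a diagnostic sketch, separate from the evaluator; the filename is taken from the traceback):

from PIL import Image, UnidentifiedImageError

path = "banner1.png"  # filename reported in the traceback
try:
    with Image.open(path) as im:
        im.verify()  # integrity check without a full decode
    print("OK: PIL recognizes the image")
except FileNotFoundError:
    print("file is missing")
except UnidentifiedImageError:
    # the case the evaluator hit: the bytes are not any image
    # format PIL knows (e.g. a truncated download saved as .png)
    print("file exists but is not a recognizable image")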
Job 52312 | LFv-sim | error | up to date: no | duration: 0:02:43
InvalidEvaluator: the same 'banner1.png' failure as job 52322 above; the traceback is identical except for in-process object addresses.
Job 41725 | LFv-sim | success | up to date: no | duration: 0:04:21
Job 41724 | LFv-sim | success | up to date: no | duration: 0:04:24
Job 38175 | LFv-sim | error | up to date: no | duration: 0:00:40
The container "evalu [...]
The container "evaluator" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.49-py3.8.egg/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/media/sdb/duckietown-tmp-evals/production/aido5-LF-sim-validation/submission9319/LFv-sim-mont02-80325a328f54-1-job38175-a-wd/challenge-results/challenge_results.yaml' does not exist.
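
This failure mode is mechanical: after a job finishes, the runner looks for challenge-results/challenge_results.yaml inside the job's working directory and raises NoResultsFound if it is absent, which happens whenever the evaluator container dies before writing results. A minimal sketch of that check (illustrative only, not the actual read_challenge_results implementation; the directory layout is taken from the message above):

import os
import yaml  # PyYAML

def read_results(working_dir: str) -> dict:
    # hypothetical helper mirroring the reported behavior
    path = os.path.join(working_dir, "challenge-results", "challenge_results.yaml")
    if not os.path.exists(path):
        # the situation jobs 38175 and 38173 ended in: the evaluator
        # exited with code 1 before any results file was written
        raise FileNotFoundError(f"File {path!r} does not exist.")
    with open(path) as f:
        return yaml.safe_load(f)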
Job 38173 | LFv-sim | error | up to date: no | duration: 0:00:42
The container "evalu [...]
The container "evaluator" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.49-py3.8.egg/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/media/sdb/duckietown-tmp-evals/production/aido5-LF-sim-validation/submission9319/LFv-sim-mont01-6ef51bb8a9d6-1-job38173-a-wd/challenge-results/challenge_results.yaml' does not exist.
Job 36319 | LFv-sim | error | up to date: no | duration: 0:00:53
The container "solut [...]
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9319/LFv-sim-Sandy1-sandy-1-job36319-a-wd/challenge-results/challenge_results.yaml' does not exist.
Job 35755 | LFv-sim | success | up to date: no | duration: 0:00:57
Job 35360 | LFv-sim | error | up to date: no | duration: 0:08:24
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9319/LFv-sim-reg04-c054faef3177-1-job35360:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9319/LFv-sim-reg04-c054faef3177-1-job35360/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9319/LFv-sim-reg04-c054faef3177-1-job35360/docker-compose.original.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9319/LFv-sim-reg04-c054faef3177-1-job35360/docker-compose.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9319/LFv-sim-reg04-c054faef3177-1-job35360/logs/challenges-runner/stdout.log
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9319/LFv-sim-reg04-c054faef3177-1-job35360/logs/challenges-runner/stderr.log
Job 34994 | LFv-sim | success | up to date: no | duration: 0:09:19
Job 34672 | LFv-sim | success | up to date: no | duration: 0:10:22
Job 34671 | LFv-sim | success | up to date: no | duration: 0:10:07