
Submission 9323

Submission: 9323
Competing: yes
Challenge: aido5-LF-sim-validation
User: Yishu Malhotra 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58185
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58185

Episodes (per-episode detail images omitted):

- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | date started | date completed | duration | message
58185 | LFv-sim | success | yes | | | 0:43:18 |
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 0.0
survival_time_median: 59.99999999999873
deviation-center-line_median: 1.2422730096440104
in-drivable-lane_median: 0.0


Other stats:

agent_compute-ego0_max: 0.012595617006859317
agent_compute-ego0_mean: 0.01236823962987412
agent_compute-ego0_median: 0.012353922901899989
agent_compute-ego0_min: 0.0121694957088372
complete-iteration_max: 0.3363611690209966
complete-iteration_mean: 0.28176864974206933
complete-iteration_median: 0.284065745653062
complete-iteration_min: 0.2225819386411567
deviation-center-line_max: 4.053503393024394
deviation-center-line_mean: 1.731069028434027
deviation-center-line_min: 0.386226701423694
deviation-heading_max: 27.859809596736422
deviation-heading_mean: 14.925250437835436
deviation-heading_median: 14.309269950178932
deviation-heading_min: 3.22265225424745
driven_any_max: 2.6645352591003757e-13
driven_any_mean: 1.9984014443252818e-13
driven_any_median: 2.6645352591003757e-13
driven_any_min: 0.0
driven_lanedir_consec_max: 0.000286102294921875
driven_lanedir_consec_mean: 7.152557373046875e-05
driven_lanedir_consec_min: 0.0
driven_lanedir_max: 0.000286102294921875
driven_lanedir_mean: 7.152557373046875e-05
driven_lanedir_median: 0.0
driven_lanedir_min: 0.0
get_duckie_state_max: 1.863476437990314e-06
get_duckie_state_mean: 1.751959671287314e-06
get_duckie_state_median: 1.7330509538356709e-06
get_duckie_state_min: 1.6782603394876014e-06
get_robot_state_max: 0.003639379409231016
get_robot_state_mean: 0.0035470604499511973
get_robot_state_median: 0.0035335715863230224
get_robot_state_min: 0.003481719217927728
get_state_dump_max: 0.004729015841075125
get_state_dump_mean: 0.004570281426178029
get_state_dump_median: 0.004565628839472152
get_state_dump_min: 0.004420852184692688
get_ui_image_max: 0.03602386037078527
get_ui_image_mean: 0.03071787350382237
get_ui_image_median: 0.030469775398406857
get_ui_image_min: 0.025908082847690504
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes details: {
  "LF-norm-loop-000-ego0": {"driven_any": 2.6645352591003757e-13, "get_ui_image": 0.028541525635095957, "step_physics": 0.20195320821820845, "survival_time": 59.99999999999873, "driven_lanedir": 0.0, "get_state_dump": 0.004667327961854196, "get_robot_state": 0.003639379409231016, "sim_render-ego0": 0.0037328932902695833, "get_duckie_state": 1.6782603394876014e-06, "in-drivable-lane": 0.0, "deviation-heading": 22.66279310353771, "agent_compute-ego0": 0.012595617006859317, "complete-iteration": 0.26797898226633954, "set_robot_commands": 0.002188744096335126, "deviation-center-line": 4.053503393024394, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.008649085582444115, "sim_compute_performance-ego0": 0.0019378140010404945},
  "LF-norm-zigzag-000-ego0": {"driven_any": 0.0, "get_ui_image": 0.03602386037078527, "step_physics": 0.260815267856671, "survival_time": 59.99999999999873, "driven_lanedir": 0.000286102294921875, "get_state_dump": 0.004420852184692688, "get_robot_state": 0.003499179160366646, "sim_render-ego0": 0.003640920494517914, "get_duckie_state": 1.7648136288200589e-06, "in-drivable-lane": 0.0, "deviation-heading": 27.859809596736422, "agent_compute-ego0": 0.012447947764972366, "complete-iteration": 0.3363611690209966, "set_robot_commands": 0.002091946550253329, "deviation-center-line": 1.0457540566888746, "driven_lanedir_consec": 0.000286102294921875, "sim_compute_sim_state": 0.011477335009547096, "sim_compute_performance-ego0": 0.001869105181030985},
  "LF-norm-techtrack-000-ego0": {"driven_any": 2.6645352591003757e-13, "get_ui_image": 0.032398025161717754, "step_physics": 0.23118699718573807, "survival_time": 59.99999999999873, "driven_lanedir": 0.0, "get_state_dump": 0.004463929717090108, "get_robot_state": 0.003481719217927728, "sim_render-ego0": 0.003696054344272534, "get_duckie_state": 1.7012882788512829e-06, "in-drivable-lane": 0.0, "deviation-heading": 3.22265225424745, "agent_compute-ego0": 0.01225989803882761, "complete-iteration": 0.30015250903978435, "set_robot_commands": 0.002046920576262335, "deviation-center-line": 1.4387919625991463, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.008700908768881768, "sim_compute_performance-ego0": 0.0018448664882002423},
  "LF-norm-small_loop-000-ego0": {"driven_any": 2.6645352591003757e-13, "get_ui_image": 0.025908082847690504, "step_physics": 0.16235226040378797, "survival_time": 59.99999999999873, "driven_lanedir": 0.0, "get_state_dump": 0.004729015841075125, "get_robot_state": 0.003567964012279399, "sim_render-ego0": 0.0036112968371770065, "get_duckie_state": 1.863476437990314e-06, "in-drivable-lane": 0.0, "deviation-heading": 5.955746796820156, "agent_compute-ego0": 0.0121694957088372, "complete-iteration": 0.2225819386411567, "set_robot_commands": 0.002246712169281946, "deviation-center-line": 0.386226701423694, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.006040846676155491, "sim_compute_performance-ego0": 0.0018822674350277967}
}
set_robot_commands_max: 0.002246712169281946
set_robot_commands_mean: 0.0021435808480331836
set_robot_commands_median: 0.002140345323294227
set_robot_commands_min: 0.002046920576262335
sim_compute_performance-ego0_max: 0.0019378140010404945
sim_compute_performance-ego0_mean: 0.0018835132763248795
sim_compute_performance-ego0_median: 0.0018756863080293907
sim_compute_performance-ego0_min: 0.0018448664882002423
sim_compute_sim_state_max: 0.011477335009547096
sim_compute_sim_state_mean: 0.008717044009257118
sim_compute_sim_state_median: 0.008674997175662941
sim_compute_sim_state_min: 0.006040846676155491
sim_render-ego0_max: 0.0037328932902695833
sim_render-ego0_mean: 0.003670291241559259
sim_render-ego0_median: 0.003668487419395224
sim_render-ego0_min: 0.0036112968371770065
simulation-passed: 1
step_physics_max: 0.260815267856671
step_physics_mean: 0.21407693341610137
step_physics_median: 0.21657010270197324
step_physics_min: 0.16235226040378797
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
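The aggregate rows above are per-metric summaries over the four episodes. As a sketch (not the official scoring code), they can be reproduced from the per-episodes details using only the standard library; the values below are copied from that JSON:

```python
from statistics import mean, median

# Per-episode values for one metric (deviation-center-line), taken from the
# "per-episodes details" JSON above.
deviation_center_line = {
    "LF-norm-loop-000-ego0": 4.053503393024394,
    "LF-norm-zigzag-000-ego0": 1.0457540566888746,
    "LF-norm-techtrack-000-ego0": 1.4387919625991463,
    "LF-norm-small_loop-000-ego0": 0.386226701423694,
}

def summarize(values):
    """Aggregate one metric across episodes, as the job page does."""
    vs = list(values)
    return {
        "min": min(vs),
        "max": max(vs),
        "mean": mean(vs),
        # With 4 episodes, the median is the midpoint of the two middle values.
        "median": median(vs),
    }

stats = summarize(deviation_center_line.values())
```

The resulting `median` agrees with the `deviation-center-line_median` reported for this job, which is a quick sanity check that the table rows really are plain episode-wise aggregates.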
No reset possible
52280 | LFv-sim | error | no | | | 0:11:12 |
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139684403047872
- M:video_aido:cmdline(in:/;out:/) 139684403044752
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
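The root cause in the traceback above is that `banner1.png` existed but was not a readable image, so PIL's `Image.open` raised `UnidentifiedImageError` deep inside the video pipeline. A minimal, stdlib-only sketch of the kind of pre-flight check that would surface this earlier; `sniff_image_format` is a hypothetical helper, not part of the evaluator or of PIL:

```python
# Magic bytes for a few common image formats. PIL raises UnidentifiedImageError
# when a file's header matches no format it knows; sniffing the header up front
# gives an earlier, clearer error message.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}

def sniff_image_format(data: bytes):
    """Return the detected format name, or None if the header is unrecognised."""
    for magic, fmt in MAGIC.items():
        if data.startswith(magic):
            return fmt
    return None
```

A truncated download or an HTML error page saved as `banner1.png` would return `None` here, which is exactly the situation PIL reports as "cannot identify image file".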
41721 | LFv-sim | success | no | | | 0:09:23 |
41720 | LFv-sim | success | no | | | 0:09:23 |
38169 | LFv-sim | error | no | | | 0:00:40 |
The container "evaluator" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.49-py3.8.egg/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/media/sdb/duckietown-tmp-evals/production/aido5-LF-sim-validation/submission9323/LFv-sim-mont02-80325a328f54-1-job38169-a-wd/challenge-results/challenge_results.yaml' does not exist.
36314 | LFv-sim | error | no | | | 0:00:47 |
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9323/LFv-sim-Sandy1-sandy-1-job36314-a-wd/challenge-results/challenge_results.yaml' does not exist.
36311 | LFv-sim | error | no | | | 0:00:50 |
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9323/LFv-sim-Sandy1-sandy-1-job36311-a-wd/challenge-results/challenge_results.yaml' does not exist.
35751 | LFv-sim | success | no | | | 0:00:59 |
35750 | LFv-sim | success | no | | | 0:01:03 |
35357 | LFv-sim | error | no | | | 0:22:04 |
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9323/LFv-sim-reg02-1b92df2e7e91-1-job35357:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9323/LFv-sim-reg02-1b92df2e7e91-1-job35357/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish and some times that there was an import error.
Check the evaluator log to see what happened.

List of all files:

 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9323/LFv-sim-reg02-1b92df2e7e91-1-job35357/docker-compose.original.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9323/LFv-sim-reg02-1b92df2e7e91-1-job35357/docker-compose.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9323/LFv-sim-reg02-1b92df2e7e91-1-job35357/logs/challenges-runner/stdout.log
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9323/LFv-sim-reg02-1b92df2e7e91-1-job35357/logs/challenges-runner/stderr.log
34991 | LFv-sim | success | no | | | 0:23:07 |
34682 | LFv-sim | success | no | | | 0:25:17 |
34680 | LFv-sim | success | no | | | 0:25:15 |
34679 | LFv-sim | success | no | | | 0:25:44 |