Duckietown Challenges

Submission 9302

Submission: 9302
Competing: yes
Challenge: aido5-LF-sim-validation
User: Yishu Malhotra 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58326
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58326


Episodes:
- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | date started | date completed | duration | message
58326 | LFv-sim | success | yes | | | 0:04:23 |
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 0.48892942179856824
survival_time_median: 5.249999999999989
deviation-center-line_median: 0.13191648840869452
in-drivable-lane_median: 3.2499999999999902


other stats
agent_compute-ego0_max: 0.013443577857244584
agent_compute-ego0_mean: 0.012320246533527526
agent_compute-ego0_median: 0.01213870418538288
agent_compute-ego0_min: 0.01155999990609976
complete-iteration_max: 0.1984870433807373
complete-iteration_mean: 0.16849383831667675
complete-iteration_median: 0.16627373537714826
complete-iteration_min: 0.1429408391316732
deviation-center-line_max: 0.1643959468172477
deviation-center-line_mean: 0.12557847619610993
deviation-center-line_min: 0.07408498114980296
deviation-heading_max: 1.3085060007319516
deviation-heading_mean: 0.8127054713150081
deviation-heading_median: 0.7946842392500675
deviation-heading_min: 0.35294740602794605
driven_any_max: 3.0258559942245338
driven_any_mean: 1.7592592311392878
driven_any_median: 1.470999276751933
driven_any_min: 1.0691823768287514
driven_lanedir_consec_max: 0.6494315759471684
driven_lanedir_consec_mean: 0.4751876951392385
driven_lanedir_consec_min: 0.27346036101264914
driven_lanedir_max: 0.6494315759471684
driven_lanedir_mean: 0.4751876951392385
driven_lanedir_median: 0.48892942179856824
driven_lanedir_min: 0.27346036101264914
get_duckie_state_max: 1.4333497910272507e-06
get_duckie_state_mean: 1.3439564217363422e-06
get_duckie_state_median: 1.40279786199586e-06
get_duckie_state_min: 1.136880171926398e-06
get_robot_state_max: 0.0037446305865333194
get_robot_state_mean: 0.0035335353952042205
get_robot_state_median: 0.0035524806405743984
get_robot_state_min: 0.0032845497131347655
get_state_dump_max: 0.004768260887690953
get_state_dump_mean: 0.004448561176459454
get_state_dump_median: 0.004445127340463492
get_state_dump_min: 0.004135729137219881
get_ui_image_max: 0.0320256096976144
get_ui_image_mean: 0.02886891717829869
get_ui_image_median: 0.029549279998027683
get_ui_image_min: 0.02435149901952499
in-drivable-lane_max: 8.100000000000005
in-drivable-lane_mean: 4.099999999999995
in-drivable-lane_min: 1.7999999999999936
per-episodes details:

LF-norm-loop-000-ego0: {"driven_any": 3.0258559942245338, "get_ui_image": 0.02777752631749862, "step_physics": 0.0895409535139035, "survival_time": 9.700000000000005, "driven_lanedir": 0.27346036101264914, "get_state_dump": 0.004407147872142303, "get_robot_state": 0.0035177255288148536, "sim_render-ego0": 0.0035932259681897287, "get_duckie_state": 1.3730464837489984e-06, "in-drivable-lane": 8.100000000000005, "deviation-heading": 1.175100072434885, "agent_compute-ego0": 0.012671952369885569, "complete-iteration": 0.1547546631250626, "set_robot_commands": 0.0020198528583233173, "deviation-center-line": 0.1643959468172477, "driven_lanedir_consec": 0.27346036101264914, "sim_compute_sim_state": 0.009268862161880885, "sim_compute_performance-ego0": 0.0018740984109731824}
LF-norm-zigzag-000-ego0: {"driven_any": 1.278825974691371, "get_ui_image": 0.031321033678556744, "step_physics": 0.1117925016503585, "survival_time": 4.699999999999991, "driven_lanedir": 0.3629620387619259, "get_state_dump": 0.004135729137219881, "get_robot_state": 0.0032845497131347655, "sim_render-ego0": 0.003365772648861534, "get_duckie_state": 1.136880171926398e-06, "in-drivable-lane": 2.8499999999999934, "deviation-heading": 1.3085060007319516, "agent_compute-ego0": 0.011605456000880192, "complete-iteration": 0.1777928076292339, "set_robot_commands": 0.0019026404932925576, "deviation-center-line": 0.1429278448490799, "driven_lanedir_consec": 0.3629620387619259, "sim_compute_sim_state": 0.008596636119641756, "sim_compute_performance-ego0": 0.001710066042448345}
LF-norm-techtrack-000-ego0: {"driven_any": 1.0691823768287514, "get_ui_image": 0.0320256096976144, "step_physics": 0.12841823555174328, "survival_time": 4.149999999999993, "driven_lanedir": 0.6494315759471684, "get_state_dump": 0.004768260887690953, "get_robot_state": 0.0037446305865333194, "sim_render-ego0": 0.003752560842604864, "get_duckie_state": 1.4333497910272507e-06, "in-drivable-lane": 1.7999999999999936, "deviation-heading": 0.41426840606525006, "agent_compute-ego0": 0.013443577857244584, "complete-iteration": 0.1984870433807373, "set_robot_commands": 0.002264212994348435, "deviation-center-line": 0.07408498114980296, "driven_lanedir_consec": 0.6494315759471684, "sim_compute_sim_state": 0.007988418851579939, "sim_compute_performance-ego0": 0.0019928131784711567}
LF-norm-small_loop-000-ego0: {"driven_any": 1.6631725788124958, "get_ui_image": 0.02435149901952499, "step_physics": 0.0861102475060357, "survival_time": 5.799999999999987, "driven_lanedir": 0.6148968048352106, "get_state_dump": 0.00448310680878468, "get_robot_state": 0.003587235752333943, "sim_render-ego0": 0.003682442200489533, "get_duckie_state": 1.4325492402427217e-06, "in-drivable-lane": 3.649999999999987, "deviation-heading": 0.35294740602794605, "agent_compute-ego0": 0.01155999990609976, "complete-iteration": 0.1429408391316732, "set_robot_commands": 0.002189952084141919, "deviation-center-line": 0.12090513196830915, "driven_lanedir_consec": 0.6148968048352106, "sim_compute_sim_state": 0.004921788843269022, "sim_compute_performance-ego0": 0.00196685546483749}
set_robot_commands_max: 0.002264212994348435
set_robot_commands_mean: 0.002094164607526557
set_robot_commands_median: 0.002104902471232618
set_robot_commands_min: 0.0019026404932925576
sim_compute_performance-ego0_max: 0.0019928131784711567
sim_compute_performance-ego0_mean: 0.0018859582741825435
sim_compute_performance-ego0_median: 0.001920476937905336
sim_compute_performance-ego0_min: 0.001710066042448345
sim_compute_sim_state_max: 0.009268862161880885
sim_compute_sim_state_mean: 0.0076939264940929005
sim_compute_sim_state_median: 0.008292527485610848
sim_compute_sim_state_min: 0.004921788843269022
sim_render-ego0_max: 0.003752560842604864
sim_render-ego0_mean: 0.003598500415036415
sim_render-ego0_median: 0.003637834084339631
sim_render-ego0_min: 0.003365772648861534
simulation-passed: 1
step_physics_max: 0.12841823555174328
step_physics_mean: 0.10396548455551025
step_physics_median: 0.100666727582131
step_physics_min: 0.0861102475060357
survival_time_max: 9.700000000000005
survival_time_mean: 6.087499999999993
survival_time_min: 4.149999999999993
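The aggregate rows above are plain descriptive statistics over the four episodes. A minimal sketch, with episode values copied from the per-episodes details (the helper name `aggregate` is ours, not part of the Duckietown tooling), shows how, e.g., survival_time_median falls out of Python's `statistics.median`:

```python
import statistics

# Two of the per-episode metrics, copied from the job details above.
per_episodes = {
    "LF-norm-loop-000-ego0":       {"survival_time": 9.700000000000005, "driven_lanedir_consec": 0.27346036101264914},
    "LF-norm-zigzag-000-ego0":     {"survival_time": 4.699999999999991, "driven_lanedir_consec": 0.3629620387619259},
    "LF-norm-techtrack-000-ego0":  {"survival_time": 4.149999999999993, "driven_lanedir_consec": 0.6494315759471684},
    "LF-norm-small_loop-000-ego0": {"survival_time": 5.799999999999987, "driven_lanedir_consec": 0.6148968048352106},
}

def aggregate(metric):
    """Min/max/mean/median of one metric across the four episodes."""
    values = sorted(ep[metric] for ep in per_episodes.values())
    return {"min": values[0], "max": values[-1],
            "mean": statistics.mean(values), "median": statistics.median(values)}

# With four episodes the median is the average of the two middle values,
# which reproduces the survival_time_median (about 5.25) and the
# driven_lanedir_consec_median (about 0.4889) reported above.
print(aggregate("survival_time")["median"])
print(aggregate("driven_lanedir_consec")["median"])
```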
58317 | LFv-sim | success | yes | | | 0:04:41 |
52344 | LFv-sim | error | no | | | 0:04:12 |
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139706008305968
- M:video_aido:cmdline(in:/;out:/) 139709119984688
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
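This job failed because procgraph's static_image block handed `banner1.png` to Pillow, which could not decode it. A stdlib-only sanity check for such a file, a sketch using a hypothetical helper `looks_like_png` (not part of the Duckietown tooling), would catch the bad banner before the video pipeline runs:

```python
import os
import tempfile

# Every real PNG starts with this fixed 8-byte signature (per the PNG spec).
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def looks_like_png(path):
    """Cheap pre-flight check: does the file carry the PNG signature?"""
    with open(path, "rb") as f:
        return f.read(8) == PNG_SIGNATURE

# A file with the right name but bogus contents, like the banner that
# broke this job, fails the check before any image decoder sees it.
bogus = os.path.join(tempfile.mkdtemp(), "banner1.png")
with open(bogus, "wb") as f:
    f.write(b"not a real PNG")

print(looks_like_png(bogus))  # False
```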
41744 | LFv-sim | success | no | | | 0:04:40 |
38220 | LFv-sim | success | no | | | 0:04:29 |
36347 | LFv-sim | success | no | | | 0:05:32 |
36344 | LFv-sim | error | no | | | 0:00:51 |
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9302/LFv-sim-Sandy1-sandy-1-job36344-a-wd/challenge-results/challenge_results.yaml' does not exist.
35774 | LFv-sim | success | no | | | 0:01:08 |
35376 | LFv-sim | error | no | | | 0:09:24 |
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9302/LFv-sim-reg04-c054faef3177-1-job35376:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9302/LFv-sim-reg04-c054faef3177-1-job35376/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish and some times that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9302/LFv-sim-reg04-c054faef3177-1-job35376/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9302/LFv-sim-reg04-c054faef3177-1-job35376/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9302/LFv-sim-reg04-c054faef3177-1-job35376/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9302/LFv-sim-reg04-c054faef3177-1-job35376/logs/challenges-runner/stderr.log
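The failure mode the runner describes is simply a missing `challenge-results/challenge_results.yaml` in the job's working directory. A minimal sketch of that check, with hypothetical helper names of our own (the actual runner uses `read_challenge_results`, whose internals are not shown here):

```python
import os

def results_path(working_dir):
    """Where a finished job is expected to leave its results file."""
    return os.path.join(working_dir, "challenge-results", "challenge_results.yaml")

def has_results(working_dir):
    """True only if the evaluator got far enough to write its results."""
    return os.path.isfile(results_path(working_dir))

# A working dir without the YAML, as in jobs 36344 and 35376 above,
# reports no results.
print(has_results("/tmp/duckietown/does-not-exist"))  # False
```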
35009 | LFv-sim | success | no | | | 0:10:16 |
34642 | LFv-sim | success | no | | | 0:10:41 |
34641 | LFv-sim | success | no | | | 0:10:59 |