
Submission 5262

Submission: 5262
Competing: yes
Challenge: aido3-LF-sim-validation
User: Rey Wiyatno 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: step1-simulation: 28227
Next:
User label: challenge-aido_LF-baseline-duckietown
Admin priority: 50
Blessing: n/a
User priority: 50

28227

Episodes evaluated:

ETHZ_autolab_technical_track-0-0

ETHZ_autolab_technical_track-1-0

ETHZ_autolab_technical_track-2-0

ETHZ_autolab_technical_track-3-0

ETHZ_autolab_technical_track-4-0

Evaluation jobs for this submission

See previous jobs for previous versions of challenges
Job ID | step | status | up to date | date started | date completed | duration | message
28227 | step1-simulation | success | yes | | | 0:10:25 |
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 3.3522835408243146
survival_time_median: 14.950000000000076
deviation-center-line_median: 0.6983479778030038
in-drivable-lane_median: 1.099999999999996


other stats
agent_compute-ego_max: 0.05595437367757161
agent_compute-ego_mean: 0.05312215260720765
agent_compute-ego_median: 0.052971528126643255
agent_compute-ego_min: 0.05054732407031416
deviation-center-line_max: 0.884954070261323
deviation-center-line_mean: 0.6992992454558573
deviation-center-line_min: 0.5306097005479814
deviation-heading_max: 3.767500739502179
deviation-heading_mean: 2.62149195788097
deviation-heading_median: 2.802259467350567
deviation-heading_min: 1.3176926613380282
driven_any_max: 4.693126255733225
driven_any_mean: 3.5720091692846907
driven_any_median: 3.671661791181552
driven_any_min: 1.830677366593908
driven_lanedir_consec_max: 4.20412410937212
driven_lanedir_consec_mean: 3.124999445770339
driven_lanedir_consec_min: 1.409507198166784
driven_lanedir_max: 4.420175272652635
driven_lanedir_mean: 3.247078784682801
driven_lanedir_median: 3.3522835408243146
driven_lanedir_min: 1.409507198166784
in-drivable-lane_max: 3.100000000000044
in-drivable-lane_mean: 1.4900000000000104
in-drivable-lane_min: 0.30000000000000426
per-episodes details:
{"ETHZ_autolab_technical_track-0-0": {"driven_any": 3.671661791181552, "sim_physics": 0.08363686005274455, "survival_time": 14.950000000000076, "driven_lanedir": 3.3522835408243146, "sim_render-ego": 0.014153429667154948, "in-drivable-lane": 1.099999999999996, "agent_compute-ego": 0.05196758190790812, "deviation-heading": 3.767500739502179, "set_robot_commands": 0.010253280798594156, "deviation-center-line": 0.884954070261323, "driven_lanedir_consec": 3.3522835408243146, "sim_compute_sim_state": 0.006557801564534505, "sim_compute_performance-ego": 0.008499125639597574, "sim_compute_robot_state-ego": 0.01055187225341797},
"ETHZ_autolab_technical_track-1-0": {"driven_any": 3.3039282684918234, "sim_physics": 0.07859285996884716, "survival_time": 14.700000000000074, "driven_lanedir": 2.849303802398152, "sim_render-ego": 0.012618296811369813, "in-drivable-lane": 3.100000000000044, "agent_compute-ego": 0.05054732407031416, "deviation-heading": 3.224198355569756, "set_robot_commands": 0.00954004939721555, "deviation-center-line": 0.820424978903238, "driven_lanedir_consec": 2.849303802398152, "sim_compute_sim_state": 0.005683602929926243, "sim_compute_performance-ego": 0.007704369064901961, "sim_compute_robot_state-ego": 0.009456311764360284},
"ETHZ_autolab_technical_track-2-0": {"driven_any": 1.830677366593908, "sim_physics": 0.08153601292963628, "survival_time": 7.149999999999983, "driven_lanedir": 1.409507198166784, "sim_render-ego": 0.014058304833365488, "in-drivable-lane": 1.8999999999999932, "agent_compute-ego": 0.052971528126643255, "deviation-heading": 1.3176926613380282, "set_robot_commands": 0.010265475386506196, "deviation-center-line": 0.5306097005479814, "driven_lanedir_consec": 1.409507198166784, "sim_compute_sim_state": 0.006434899110060472, "sim_compute_performance-ego": 0.008671675528679694, "sim_compute_robot_state-ego": 0.010540482047554495},
"ETHZ_autolab_technical_track-3-0": {"driven_any": 4.693126255733225, "sim_physics": 0.08197489897410075, "survival_time": 14.950000000000076, "driven_lanedir": 4.420175272652635, "sim_render-ego": 0.013757298787434896, "in-drivable-lane": 1.050000000000015, "agent_compute-ego": 0.05416995525360107, "deviation-heading": 2.802259467350567, "set_robot_commands": 0.009898571968078614, "deviation-center-line": 0.6983479778030038, "driven_lanedir_consec": 3.809778578090324, "sim_compute_sim_state": 0.006363832155863444, "sim_compute_performance-ego": 0.00850698709487915, "sim_compute_robot_state-ego": 0.010011696815490722},
"ETHZ_autolab_technical_track-4-0": {"driven_any": 4.3606521644229455, "sim_physics": 0.08393346468607585, "survival_time": 14.950000000000076, "driven_lanedir": 4.20412410937212, "sim_render-ego": 0.013853347301483156, "in-drivable-lane": 0.30000000000000426, "agent_compute-ego": 0.05595437367757161, "deviation-heading": 1.99580856564432, "set_robot_commands": 0.010508958498636882, "deviation-center-line": 0.5621594997637399, "driven_lanedir_consec": 4.20412410937212, "sim_compute_sim_state": 0.006496437390645345, "sim_compute_performance-ego": 0.008482121626536051, "sim_compute_robot_state-ego": 0.010102113882700605}}
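The summary statistics above are plain aggregates of these per-episode values. As an illustration using only Python's standard library (the dict below is hand-copied from the `driven_lanedir_consec` entries in the per-episode details, not produced by any Duckietown API), the reported median, min, and max can be recomputed directly:

```python
from statistics import mean, median

# driven_lanedir_consec per episode, copied from the per-episode details above
driven_lanedir_consec = {
    "ETHZ_autolab_technical_track-0-0": 3.3522835408243146,
    "ETHZ_autolab_technical_track-1-0": 2.849303802398152,
    "ETHZ_autolab_technical_track-2-0": 1.409507198166784,
    "ETHZ_autolab_technical_track-3-0": 3.809778578090324,
    "ETHZ_autolab_technical_track-4-0": 4.20412410937212,
}

values = list(driven_lanedir_consec.values())
print(median(values))  # 3.3522835408243146  (matches driven_lanedir_consec_median)
print(min(values))     # 1.409507198166784   (matches driven_lanedir_consec_min)
print(max(values))     # 4.20412410937212    (matches driven_lanedir_consec_max)
print(mean(values))    # ~3.124999445770339  (matches driven_lanedir_consec_mean)
```

The same recipe applies to every other `_max`/`_mean`/`_median`/`_min` row in the table.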
set_robot_commands_max: 0.010508958498636882
set_robot_commands_mean: 0.01009326720980628
set_robot_commands_median: 0.010253280798594156
set_robot_commands_min: 0.00954004939721555
sim_compute_performance-ego_max: 0.008671675528679694
sim_compute_performance-ego_mean: 0.008372855790918888
sim_compute_performance-ego_median: 0.008499125639597574
sim_compute_performance-ego_min: 0.007704369064901961
sim_compute_robot_state-ego_max: 0.01055187225341797
sim_compute_robot_state-ego_mean: 0.010132495352704817
sim_compute_robot_state-ego_median: 0.010102113882700605
sim_compute_robot_state-ego_min: 0.009456311764360284
sim_compute_sim_state_max: 0.006557801564534505
sim_compute_sim_state_mean: 0.006307314630206002
sim_compute_sim_state_median: 0.006434899110060472
sim_compute_sim_state_min: 0.005683602929926243
sim_physics_max: 0.08393346468607585
sim_physics_mean: 0.0819348193222809
sim_physics_median: 0.08197489897410075
sim_physics_min: 0.07859285996884716
sim_render-ego_max: 0.014153429667154948
sim_render-ego_mean: 0.01368813548016166
sim_render-ego_median: 0.013853347301483156
sim_render-ego_min: 0.012618296811369813
simulation-passed: 1
survival_time_max: 14.950000000000076
survival_time_mean: 13.340000000000057
survival_time_min: 7.149999999999983
No reset possible
28226 | step1-simulation | success | yes | | | 0:07:09 |
No reset possible
28225 | step1-simulation | host-error | yes | | | 0:09:26 | Uncaught exception: [...]
Uncaught exception:
Traceback (most recent call last):
  File "/project/src/duckietown_challenges_runner/runner.py", line 648, in get_cr
    wd, aws_config, copy_to_machine_cache=copy_to_machine_cache
  File "/project/src/duckietown_challenges_runner/runner.py", line 1309, in upload_files
    aws_config, toupload, copy_to_machine_cache=copy_to_machine_cache
  File "/project/src/duckietown_challenges_runner/runner.py", line 1563, in upload
    sha256hex = compute_sha256hex(realfile)
  File "/project/src/duckietown_challenges_runner/runner.py", line 1619, in compute_sha256hex
    res: bytes = subprocess.check_output(cmd)
  File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['shasum', '-a', '256', '/tmp/duckietown/DT18/evaluator/executions/aido3-LF-sim-validation/submission5262/step1-simulation-ip-172-31-46-148-10490-job28225/challenge-evaluation-output/episodes/ETHZ_autolab_technical_track-2-0/camera.mp4']' returned non-zero exit status 1.
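The failing call shells out to `shasum -a 256` via `subprocess.check_output`, and the traceback shows it exiting non-zero for one episode's `camera.mp4` (typically because the file is missing or unreadable). A sketch of a hashing helper that uses Python's `hashlib` instead of an external process avoids the dependency on the `shasum` binary and fails with a clearer error; this is an illustrative alternative, not the runner's actual implementation (the name `compute_sha256hex` is reused from the traceback for comparison only).

```python
import hashlib
import os

def compute_sha256hex(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of a file, streamed in chunks.

    Unlike shelling out to `shasum -a 256`, a missing file raises a
    descriptive FileNotFoundError instead of a bare CalledProcessError.
    """
    if not os.path.isfile(path):
        raise FileNotFoundError(f"cannot hash non-existent file: {path}")
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Streaming in fixed-size chunks keeps memory use constant even for large video artefacts such as `camera.mp4`.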
No reset possible