
Submission 5002

Submission: 5002
Competing: yes
Challenge: aido3-LF-sim-validation
User: Rey Wiyatno 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: step1-simulation: 27798
Next:
User label: challenge-aido_LF-baseline-duckietown
Admin priority: 50
Blessing: n/a
User priority: 50

Job 27798

Episodes evaluated in this job (click an episode image on the challenge page for detailed per-episode statistics):

- ETHZ_autolab_technical_track-0-0
- ETHZ_autolab_technical_track-1-0
- ETHZ_autolab_technical_track-2-0
- ETHZ_autolab_technical_track-3-0
- ETHZ_autolab_technical_track-4-0

Evaluation jobs for this submission

See previous jobs for previous versions of the challenges.
Job ID | step | status | up to date | date started | date completed | duration | message
27798 | step1-simulation | success | yes | | | 0:10:39 |
driven_lanedir_consec_median: 3.228631123448489
survival_time_median: 14.950000000000076
deviation-center-line_median: 0.660491647214211
in-drivable-lane_median: 1.9000000000000172


other stats
agent_compute-ego_max: 0.047627152601877845
agent_compute-ego_mean: 0.04520170386632284
agent_compute-ego_median: 0.046135979493459066
agent_compute-ego_min: 0.041410404046376546
deviation-center-line_max: 0.8038989128126163
deviation-center-line_mean: 0.6841773519038494
deviation-center-line_min: 0.5266507055554578
deviation-heading_max: 3.859435908867213
deviation-heading_mean: 3.097740842147828
deviation-heading_median: 3.390041394078433
deviation-heading_min: 2.045449470202208
driven_any_max: 5.13557431186407
driven_any_mean: 4.006726537980654
driven_any_median: 4.036228601952118
driven_any_min: 3.214375565183025
driven_lanedir_consec_max: 5.070024939716314
driven_lanedir_consec_mean: 3.406264579960059
driven_lanedir_consec_min: 1.8670651109614973
driven_lanedir_max: 5.070024939716314
driven_lanedir_mean: 3.796417087602367
driven_lanedir_median: 3.844471589483445
driven_lanedir_min: 2.911519630953274
in-drivable-lane_max: 2.250000000000032
in-drivable-lane_mean: 1.2100000000000153
in-drivable-lane_min: 0
per-episodes details:

ETHZ_autolab_technical_track-0-0:
  driven_any: 3.214375565183025
  sim_physics: 0.06834197282791138
  survival_time: 14.950000000000076
  driven_lanedir: 2.911519630953274
  sim_render-ego: 0.011670746008555097
  in-drivable-lane: 1.9000000000000172
  agent_compute-ego: 0.041410404046376546
  deviation-heading: 3.390041394078433
  set_robot_commands: 0.008784444332122802
  deviation-center-line: 0.6313692913603811
  driven_lanedir_consec: 2.911519630953274
  sim_compute_sim_state: 0.004932911396026612
  sim_compute_performance-ego: 0.007283503214518229
  sim_compute_robot_state-ego: 0.008368744055430094

ETHZ_autolab_technical_track-1-0:
  driven_any: 3.531328159552064
  sim_physics: 0.06606185356775919
  survival_time: 14.950000000000076
  driven_lanedir: 3.2019871831380873
  sim_render-ego: 0.011844131151835123
  in-drivable-lane: 2.250000000000032
  agent_compute-ego: 0.04456071535746257
  deviation-heading: 3.410655528345244
  set_robot_commands: 0.008437997500101725
  deviation-center-line: 0.8038989128126163
  driven_lanedir_consec: 1.8670651109614973
  sim_compute_sim_state: 0.0048477411270141605
  sim_compute_performance-ego: 0.0070063249270121255
  sim_compute_robot_state-ego: 0.008352364699045817

ETHZ_autolab_technical_track-2-0:
  driven_any: 5.13557431186407
  sim_physics: 0.06673817793528239
  survival_time: 14.950000000000076
  driven_lanedir: 5.070024939716314
  sim_render-ego: 0.01172438939412435
  in-drivable-lane: 0
  agent_compute-ego: 0.046135979493459066
  deviation-heading: 2.045449470202208
  set_robot_commands: 0.008807384173075358
  deviation-center-line: 0.7984762025765813
  driven_lanedir_consec: 5.070024939716314
  sim_compute_sim_state: 0.0049381256103515625
  sim_compute_performance-ego: 0.007146817843119303
  sim_compute_robot_state-ego: 0.008480676809946696

ETHZ_autolab_technical_track-3-0:
  driven_any: 4.036228601952118
  sim_physics: 0.06828269322713217
  survival_time: 14.950000000000076
  driven_lanedir: 3.9540820947207185
  sim_render-ego: 0.011895328362782795
  in-drivable-lane: 0
  agent_compute-ego: 0.046274267832438154
  deviation-heading: 2.783121909246042
  set_robot_commands: 0.008705720901489258
  deviation-center-line: 0.5266507055554578
  driven_lanedir_consec: 3.9540820947207185
  sim_compute_sim_state: 0.005077759424845378
  sim_compute_performance-ego: 0.00748047669728597
  sim_compute_robot_state-ego: 0.008480219841003419

ETHZ_autolab_technical_track-4-0:
  driven_any: 4.116126051351996
  sim_physics: 0.06854869445164999
  survival_time: 14.950000000000076
  driven_lanedir: 3.844471589483445
  sim_render-ego: 0.012187442779541017
  in-drivable-lane: 1.900000000000027
  agent_compute-ego: 0.047627152601877845
  deviation-heading: 3.859435908867213
  set_robot_commands: 0.009055482546488443
  deviation-center-line: 0.660491647214211
  driven_lanedir_consec: 3.228631123448489
  sim_compute_sim_state: 0.005110837618509928
  sim_compute_performance-ego: 0.007486874262491862
  sim_compute_robot_state-ego: 0.008943485418955484
set_robot_commands_max: 0.009055482546488443
set_robot_commands_mean: 0.008758205890655516
set_robot_commands_median: 0.008784444332122802
set_robot_commands_min: 0.008437997500101725
sim_compute_performance-ego_max: 0.007486874262491862
sim_compute_performance-ego_mean: 0.007280799388885498
sim_compute_performance-ego_median: 0.007283503214518229
sim_compute_performance-ego_min: 0.0070063249270121255
sim_compute_robot_state-ego_max: 0.008943485418955484
sim_compute_robot_state-ego_mean: 0.008525098164876302
sim_compute_robot_state-ego_median: 0.008480219841003419
sim_compute_robot_state-ego_min: 0.008352364699045817
sim_compute_sim_state_max: 0.005110837618509928
sim_compute_sim_state_mean: 0.004981475035349528
sim_compute_sim_state_median: 0.0049381256103515625
sim_compute_sim_state_min: 0.0048477411270141605
sim_physics_max: 0.06854869445164999
sim_physics_mean: 0.06759467840194702
sim_physics_median: 0.06828269322713217
sim_physics_min: 0.06606185356775919
sim_render-ego_max: 0.012187442779541017
sim_render-ego_mean: 0.011864407539367676
sim_render-ego_median: 0.011844131151835123
sim_render-ego_min: 0.011670746008555097
simulation-passed: 1
survival_time_max: 14.950000000000076
survival_time_mean: 14.950000000000076
survival_time_min: 14.950000000000076
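
The aggregate rows above (min, max, mean, median) are plain summaries over the five episodes. A minimal Python sketch of how such a summary can be recomputed from the per-episodes details, assuming the values are copied into a local dict; the aggregate helper and variable names are illustrative, not part of the evaluator:

    import statistics

    def aggregate(values):
        # Summarize a per-episode metric the same way as the stats listed above.
        return {
            "min": min(values),
            "max": max(values),
            "mean": statistics.mean(values),
            "median": statistics.median(values),
        }

    # driven_lanedir_consec per episode, copied from the per-episodes details above.
    driven_lanedir_consec = {
        "ETHZ_autolab_technical_track-0-0": 2.911519630953274,
        "ETHZ_autolab_technical_track-1-0": 1.8670651109614973,
        "ETHZ_autolab_technical_track-2-0": 5.070024939716314,
        "ETHZ_autolab_technical_track-3-0": 3.9540820947207185,
        "ETHZ_autolab_technical_track-4-0": 3.228631123448489,
    }

    print(aggregate(list(driven_lanedir_consec.values())))
    # The median, 3.228631123448489, matches driven_lanedir_consec_median above.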
27797 | step1-simulation | success | yes | | | 0:07:21 |
27796 | step1-simulation | host-error | yes | | | 0:13:10 | Uncaught exception: [...]
Uncaught exception:
Traceback (most recent call last):
  File "/project/src/duckietown_challenges_runner/runner.py", line 648, in get_cr
    wd, aws_config, copy_to_machine_cache=copy_to_machine_cache
  File "/project/src/duckietown_challenges_runner/runner.py", line 1309, in upload_files
    aws_config, toupload, copy_to_machine_cache=copy_to_machine_cache
  File "/project/src/duckietown_challenges_runner/runner.py", line 1563, in upload
    sha256hex = compute_sha256hex(realfile)
  File "/project/src/duckietown_challenges_runner/runner.py", line 1619, in compute_sha256hex
    res: bytes = subprocess.check_output(cmd)
  File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['shasum', '-a', '256', '/tmp/duckietown/DT18/evaluator/executions/aido3-LF-sim-validation/submission5002/step1-simulation-ip-172-31-46-148-10492-job27796/challenge-evaluation-output/episodes/ETHZ_autolab_technical_track-4-0/camera.mp4']' returned non-zero exit status 1.
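
The host-error comes from the runner's compute_sha256hex helper, which shells out to 'shasum -a 256' via subprocess.check_output; the command exited with status 1, which typically indicates the artefact file (here camera.mp4 for episode ETHZ_autolab_technical_track-4-0) was missing or unreadable on the evaluator host. Below is a minimal sketch of an equivalent checksum computed in pure Python, assuming only a local file path; it is illustrative, not the runner's actual implementation:

    import hashlib

    def compute_sha256hex(realfile: str) -> str:
        # Hash the file in-process instead of shelling out to 'shasum -a 256',
        # so a missing or unreadable file raises a clear FileNotFoundError or
        # PermissionError rather than a CalledProcessError with exit status 1.
        digest = hashlib.sha256()
        with open(realfile, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()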