
Submission 4417

Submission: 4417
Competing: yes
Challenge: aido3-LF-sim-validation
User: Liam Paull 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: step1-simulation: 27447
Next:
User label: challenge-aido_LF-baseline-duckietown
Admin priority: 50
Blessing: n/a
User priority: 50

Job 27447

Episodes:
- ETHZ_autolab_technical_track-0-0
- ETHZ_autolab_technical_track-1-0
- ETHZ_autolab_technical_track-2-0
- ETHZ_autolab_technical_track-3-0
- ETHZ_autolab_technical_track-4-0

Evaluation jobs for this submission

Job 27447: step1-simulation, success, up to date: yes, duration 0:07:51

driven_lanedir_consec_median: 2.196995151719801
survival_time_median: 14.950000000000076
deviation-center-line_median: 0.7303090570032075
in-drivable-lane_median: 0


Other stats

agent_compute-ego_max: 0.0225092876388366
agent_compute-ego_mean: 0.02193434437977741
agent_compute-ego_median: 0.02205894390741984
agent_compute-ego_min: 0.021028313636779785
deviation-center-line_max: 0.8509029705866005
deviation-center-line_mean: 0.5736805839416663
deviation-center-line_min: 0.1549771653432125
deviation-heading_max: 4.18431100046116
deviation-heading_mean: 1.8708480732251929
deviation-heading_median: 1.318719392461776
deviation-heading_min: 1.0535495420275331
driven_any_max: 2.349158135417712
driven_any_mean: 1.807559290023501
driven_any_median: 2.349051822764532
driven_any_min: 0.7101295572497025
driven_lanedir_consec_max: 2.3376212546131323
driven_lanedir_consec_mean: 1.743052933951566
driven_lanedir_consec_min: 0.6171866163420615
driven_lanedir_max: 2.3376212546131323
driven_lanedir_mean: 1.743052933951566
driven_lanedir_median: 2.196995151719801
driven_lanedir_min: 0.6171866163420615
in-drivable-lane_max: 0.2499999999999991
in-drivable-lane_mean: 0.08000000000000025
in-drivable-lane_min: 0
per-episodes details (see the aggregation sketch after this statistics block):
  ETHZ_autolab_technical_track-0-0: {"driven_any": 0.7101295572497025, "sim_physics": 0.08700459698836009, "survival_time": 4.799999999999991, "driven_lanedir": 0.6171866163420615, "sim_render-ego": 0.009790601829687754, "in-drivable-lane": 0.2499999999999991, "agent_compute-ego": 0.02205894390741984, "deviation-heading": 1.2930496610758715, "set_robot_commands": 0.008917286992073059, "deviation-center-line": 0.1549771653432125, "driven_lanedir_consec": 0.6171866163420615, "sim_compute_sim_state": 0.004502475261688232, "sim_compute_performance-ego": 0.005947420994440715, "sim_compute_robot_state-ego": 0.007316728432973226}
  ETHZ_autolab_technical_track-1-0: {"driven_any": 2.349148259886916, "sim_physics": 0.08135948181152344, "survival_time": 14.950000000000076, "driven_lanedir": 2.3334351084295557, "sim_render-ego": 0.009505672454833984, "in-drivable-lane": 0, "agent_compute-ego": 0.021028313636779785, "deviation-heading": 1.5046107700996223, "set_robot_commands": 0.008466286659240723, "deviation-center-line": 0.8509029705866005, "driven_lanedir_consec": 2.3334351084295557, "sim_compute_sim_state": 0.004760456085205078, "sim_compute_performance-ego": 0.0054647151629130045, "sim_compute_robot_state-ego": 0.007021607557932536}
  ETHZ_autolab_technical_track-2-0: {"driven_any": 2.349051822764532, "sim_physics": 0.07817450205485026, "survival_time": 14.950000000000076, "driven_lanedir": 2.196995151719801, "sim_render-ego": 0.00996528704961141, "in-drivable-lane": 0, "agent_compute-ego": 0.022421419620513916, "deviation-heading": 4.18431100046116, "set_robot_commands": 0.00932513395945231, "deviation-center-line": 0.7303090570032075, "driven_lanedir_consec": 2.196995151719801, "sim_compute_sim_state": 0.006376862525939941, "sim_compute_performance-ego": 0.005742798646291097, "sim_compute_robot_state-ego": 0.007435685793558757}
  ETHZ_autolab_technical_track-3-0: {"driven_any": 1.280308674798641, "sim_physics": 0.08387228236140976, "survival_time": 8.299999999999983, "driven_lanedir": 1.2300265386532798, "sim_render-ego": 0.010071533272065312, "in-drivable-lane": 0.15000000000000213, "agent_compute-ego": 0.0225092876388366, "deviation-heading": 1.0535495420275331, "set_robot_commands": 0.008748195257531592, "deviation-center-line": 0.3926569310357048, "driven_lanedir_consec": 1.2300265386532798, "sim_compute_sim_state": 0.004723902208259307, "sim_compute_performance-ego": 0.005831870688013284, "sim_compute_robot_state-ego": 0.007436842803495476}
  ETHZ_autolab_technical_track-4-0: {"driven_any": 2.349158135417712, "sim_physics": 0.07772388378779094, "survival_time": 14.950000000000076, "driven_lanedir": 2.3376212546131323, "sim_render-ego": 0.009701139132181805, "in-drivable-lane": 0, "agent_compute-ego": 0.021653757095336915, "deviation-heading": 1.318719392461776, "set_robot_commands": 0.00912797212600708, "deviation-center-line": 0.7395567957396065, "driven_lanedir_consec": 2.3376212546131323, "sim_compute_sim_state": 0.0043719808260599775, "sim_compute_performance-ego": 0.005574012597401937, "sim_compute_robot_state-ego": 0.00714266300201416}
set_robot_commands_max: 0.00932513395945231
set_robot_commands_mean: 0.008916974998860952
set_robot_commands_median: 0.008917286992073059
set_robot_commands_min: 0.008466286659240723
sim_compute_performance-ego_max: 0.005947420994440715
sim_compute_performance-ego_mean: 0.005712163617812007
sim_compute_performance-ego_median: 0.005742798646291097
sim_compute_performance-ego_min: 0.0054647151629130045
sim_compute_robot_state-ego_max: 0.007436842803495476
sim_compute_robot_state-ego_mean: 0.007270705517994832
sim_compute_robot_state-ego_median: 0.007316728432973226
sim_compute_robot_state-ego_min: 0.007021607557932536
sim_compute_sim_state_max: 0.006376862525939941
sim_compute_sim_state_mean: 0.004947135381430508
sim_compute_sim_state_median: 0.004723902208259307
sim_compute_sim_state_min: 0.0043719808260599775
sim_physics_max: 0.08700459698836009
sim_physics_mean: 0.0816269494007869
sim_physics_median: 0.08135948181152344
sim_physics_min: 0.07772388378779094
sim_render-ego_max: 0.010071533272065312
sim_render-ego_mean: 0.009806846747676054
sim_render-ego_median: 0.009790601829687754
sim_render-ego_min: 0.009505672454833984
simulation-passed: 1
survival_time_max: 14.950000000000076
survival_time_mean: 11.59000000000004
survival_time_min: 4.799999999999991
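
The _min/_mean/_median/_max entries above are per-metric aggregates over the five episodes listed in the per-episodes details. The following is a minimal sketch of how such aggregates can be recomputed from that JSON; it is an illustration under that assumption, not the evaluator's actual scoring code, and the hard-coded episodes and metrics are a small subset copied from the data above.

# Sketch only: recompute per-metric min/mean/median/max aggregates from the
# per-episodes details shown above. Not the evaluator's implementation.
import statistics

def aggregate(per_episode: dict) -> dict:
    """Collect each metric across episodes and compute min/mean/median/max."""
    stats = {}
    metric_names = sorted({m for ep in per_episode.values() for m in ep})
    for m in metric_names:
        values = [ep[m] for ep in per_episode.values() if m in ep]
        stats[m + "_min"] = min(values)
        stats[m + "_mean"] = statistics.mean(values)
        stats[m + "_median"] = statistics.median(values)
        stats[m + "_max"] = max(values)
    return stats

# Example with two episodes and two metrics copied verbatim from the data above;
# the full per-episodes blob can be parsed with json.loads() and passed in the same way.
per_episode = {
    "ETHZ_autolab_technical_track-0-0": {
        "survival_time": 4.799999999999991,
        "driven_lanedir_consec": 0.6171866163420615,
    },
    "ETHZ_autolab_technical_track-1-0": {
        "survival_time": 14.950000000000076,
        "driven_lanedir_consec": 2.3334351084295557,
    },
}
for name, value in sorted(aggregate(per_episode).items()):
    print(name, value)
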
Job 25607: step1-simulation, success, up to date: no, duration 0:15:12
Job 25605: step1-simulation, success, up to date: no, duration 0:15:32
Job 25563: step1-simulation, host-error, up to date: no, duration 0:14:30
Uncaught exception:
Traceback (most recent call last):
  File "/project/src/duckietown_challenges_runner/runner.py", line 648, in get_cr
    wd, aws_config, copy_to_machine_cache=copy_to_machine_cache
  File "/project/src/duckietown_challenges_runner/runner.py", line 1309, in upload_files
    aws_config, toupload, copy_to_machine_cache=copy_to_machine_cache
  File "/project/src/duckietown_challenges_runner/runner.py", line 1563, in upload
    sha256hex = compute_sha256hex(realfile)
  File "/project/src/duckietown_challenges_runner/runner.py", line 1619, in compute_sha256hex
    res: bytes = subprocess.check_output(cmd)
  File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['shasum', '-a', '256', '/tmp/duckietown/DT18/evaluator/executions/aido3-LF-sim-validation/submission4417/step1-simulation-ip-172-31-46-148-10491-job25563/challenge-evaluation-output/episodes/ETHZ_autolab_technical_track-1-0/camera.mp4.metadata.yaml']' returned non-zero exit status 1.
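
The traceback shows the runner's compute_sha256hex step shelling out to `shasum -a 256 <file>` via subprocess.check_output; a non-zero exit status from shasum typically means the target file (here an episode's camera.mp4.metadata.yaml) was missing or unreadable when the upload step ran. As an illustration only, assuming just the behavior visible in the traceback and not the actual runner.py code, the same digest can be computed in-process with hashlib, which surfaces a missing file as an ordinary FileNotFoundError rather than a CalledProcessError.

# Illustrative sketch, not the runner's implementation: hex SHA-256 of a file
# computed with hashlib instead of shelling out to `shasum -a 256`.
import hashlib

def compute_sha256hex(realfile: str) -> str:
    """Return the hex SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(realfile, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example (hypothetical local file name):
# print(compute_sha256hex("camera.mp4.metadata.yaml"))
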
Job 25252: step1-simulation, success, up to date: no, duration 0:07:15