
Submission 5085

Submission: 5085
Competing: yes
Challenge: aido3-LF-sim-validation
User: Rey Wiyatno 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: step1-simulation: 27959
Next:
User label: challenge-aido_LF-baseline-duckietown
Admin priority: 50
Blessing: n/a
User priority: 50

Job 27959

Episodes:

ETHZ_autolab_technical_track-0-0
ETHZ_autolab_technical_track-1-0
ETHZ_autolab_technical_track-2-0
ETHZ_autolab_technical_track-3-0
ETHZ_autolab_technical_track-4-0

Evaluation jobs for this submission

See previous jobs for previous versions of the challenge.
Job ID: 27959 | step: step1-simulation | status: success | up to date: yes | duration: 0:09:42
Artefacts hidden.
driven_lanedir_consec_median: 2.5069651895044895
survival_time_median: 14.950000000000076
deviation-center-line_median: 0.604795864182004
in-drivable-lane_median: 1.2499999999999956


other stats
agent_compute-ego_max: 0.05524349530537923
agent_compute-ego_mean: 0.05077271080017091
agent_compute-ego_median: 0.05147205511728922
agent_compute-ego_min: 0.04659991979598999
deviation-center-line_max: 0.8766211599158618
deviation-center-line_mean: 0.6419770409899535
deviation-center-line_min: 0.5171253175048868
deviation-heading_max: 4.615553458452684
deviation-heading_mean: 3.70540751629475
deviation-heading_median: 3.617894494100888
deviation-heading_min: 2.5013156638821408
driven_any_max: 4.126580199760841
driven_any_mean: 3.158345234789768
driven_any_median: 3.26972261465022
driven_any_min: 2.2301158669319863
driven_lanedir_consec_max: 4.058642421454366
driven_lanedir_consec_mean: 2.586271706051054
driven_lanedir_consec_min: 1.522395877694296
driven_lanedir_max: 4.058642421454366
driven_lanedir_mean: 2.9403438604728436
driven_lanedir_median: 2.9992447266111744
driven_lanedir_min: 1.8989920811815693
in-drivable-lane_max: 1.6500000000000234
in-drivable-lane_mean: 0.8400000000000029
in-drivable-lane_min: 0
per-episodes details:
{"ETHZ_autolab_technical_track-0-0": {"driven_any": 2.6194647958572683, "sim_physics": 0.06847025791803996, "survival_time": 14.950000000000076, "driven_lanedir": 2.5069651895044895, "sim_render-ego": 0.011310826937357583, "in-drivable-lane": 0, "agent_compute-ego": 0.04659991979598999, "deviation-heading": 3.617894494100888, "set_robot_commands": 0.008877373536427816, "deviation-center-line": 0.5802005723780623, "driven_lanedir_consec": 2.5069651895044895, "sim_compute_sim_state": 0.004996332327524821, "sim_compute_performance-ego": 0.006978635787963867, "sim_compute_robot_state-ego": 0.00846405824025472},
 "ETHZ_autolab_technical_track-1-0": {"driven_any": 2.2301158669319863, "sim_physics": 0.0660188635190328, "survival_time": 14.950000000000076, "driven_lanedir": 1.8989920811815693, "sim_render-ego": 0.011113768418629964, "in-drivable-lane": 1.6500000000000234, "agent_compute-ego": 0.04698974609375, "deviation-heading": 4.615553458452684, "set_robot_commands": 0.008351968924204508, "deviation-center-line": 0.604795864182004, "driven_lanedir_consec": 1.8441103149909437, "sim_compute_sim_state": 0.0049169230461120605, "sim_compute_performance-ego": 0.006606684525807698, "sim_compute_robot_state-ego": 0.00813500960667928},
 "ETHZ_autolab_technical_track-2-0": {"driven_any": 3.26972261465022, "sim_physics": 0.06905198971430461, "survival_time": 14.950000000000076, "driven_lanedir": 2.9992447266111744, "sim_render-ego": 0.01118696371714274, "in-drivable-lane": 1.2499999999999956, "agent_compute-ego": 0.05147205511728922, "deviation-heading": 3.438340393329774, "set_robot_commands": 0.008714619477589926, "deviation-center-line": 0.8766211599158618, "driven_lanedir_consec": 2.9992447266111744, "sim_compute_sim_state": 0.005017382303873698, "sim_compute_performance-ego": 0.006875677903493246, "sim_compute_robot_state-ego": 0.008429925441741943},
 "ETHZ_autolab_technical_track-3-0": {"driven_any": 3.5458426967485233, "sim_physics": 0.07054197231928508, "survival_time": 14.950000000000076, "driven_lanedir": 3.2378748836126183, "sim_render-ego": 0.011918127536773682, "in-drivable-lane": 1.2999999999999954, "agent_compute-ego": 0.05355833768844605, "deviation-heading": 4.353933571708266, "set_robot_commands": 0.009302276770273845, "deviation-center-line": 0.6311422909689522, "driven_lanedir_consec": 1.522395877694296, "sim_compute_sim_state": 0.005238259633382161, "sim_compute_performance-ego": 0.007294990221659342, "sim_compute_robot_state-ego": 0.009425973892211914},
 "ETHZ_autolab_technical_track-4-0": {"driven_any": 4.126580199760841, "sim_physics": 0.07115794022878011, "survival_time": 14.950000000000076, "driven_lanedir": 4.058642421454366, "sim_render-ego": 0.012216036319732664, "in-drivable-lane": 0, "agent_compute-ego": 0.05524349530537923, "deviation-heading": 2.5013156638821408, "set_robot_commands": 0.009433409372965496, "deviation-center-line": 0.5171253175048868, "driven_lanedir_consec": 4.058642421454366, "sim_compute_sim_state": 0.00543471892674764, "sim_compute_performance-ego": 0.00737693707148234, "sim_compute_robot_state-ego": 0.00897936741511027}}
set_robot_commands_max: 0.009433409372965496
set_robot_commands_mean: 0.008935929616292318
set_robot_commands_median: 0.008877373536427816
set_robot_commands_min: 0.008351968924204508
sim_compute_performance-ego_max: 0.00737693707148234
sim_compute_performance-ego_mean: 0.007026585102081298
sim_compute_performance-ego_median: 0.006978635787963867
sim_compute_performance-ego_min: 0.006606684525807698
sim_compute_robot_state-ego_max: 0.009425973892211914
sim_compute_robot_state-ego_mean: 0.008686866919199624
sim_compute_robot_state-ego_median: 0.00846405824025472
sim_compute_robot_state-ego_min: 0.00813500960667928
sim_compute_sim_state_max: 0.00543471892674764
sim_compute_sim_state_mean: 0.005120723247528077
sim_compute_sim_state_median: 0.005017382303873698
sim_compute_sim_state_min: 0.0049169230461120605
sim_physics_max: 0.07115794022878011
sim_physics_mean: 0.06904820473988851
sim_physics_median: 0.06905198971430461
sim_physics_min: 0.0660188635190328
sim_render-ego_max: 0.012216036319732664
sim_render-ego_mean: 0.011549144585927328
sim_render-ego_median: 0.011310826937357583
sim_render-ego_min: 0.011113768418629964
simulation-passed: 1
survival_time_max: 14.950000000000076
survival_time_mean: 14.950000000000076
survival_time_min: 14.950000000000076
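
The aggregate rows above (min, mean, median, max) summarize the five episodes listed in the per-episodes details. As an illustration only, here is a minimal sketch of recomputing such aggregates with Python's statistics module; it assumes the per-episodes JSON above has been saved locally as episodes.json (a hypothetical filename, not an artefact of this submission):

import json
import statistics

# Load the per-episodes details shown above (hypothetical local file).
with open("episodes.json") as f:
    episodes = json.load(f)

def aggregate(metric: str) -> dict:
    # Collect one value per episode for the given metric, then aggregate.
    values = [ep[metric] for ep in episodes.values()]
    return {
        "min": min(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "max": max(values),
    }

print(aggregate("deviation-center-line"))
# The median printed here should match deviation-center-line_median above.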
Job ID: 27958 | step: step1-simulation | status: success | up to date: yes | duration: 0:09:06
Artefacts hidden.
Job ID: 27949 | step: step1-simulation | status: host-error | up to date: yes | duration: 0:09:46 | message: Uncaught exception: [...]
Uncaught exception:
Traceback (most recent call last):
  File "/project/src/duckietown_challenges_runner/runner.py", line 648, in get_cr
    wd, aws_config, copy_to_machine_cache=copy_to_machine_cache
  File "/project/src/duckietown_challenges_runner/runner.py", line 1309, in upload_files
    aws_config, toupload, copy_to_machine_cache=copy_to_machine_cache
  File "/project/src/duckietown_challenges_runner/runner.py", line 1563, in upload
    sha256hex = compute_sha256hex(realfile)
  File "/project/src/duckietown_challenges_runner/runner.py", line 1619, in compute_sha256hex
    res: bytes = subprocess.check_output(cmd)
  File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['shasum', '-a', '256', '/tmp/duckietown/DT18/evaluator/executions/aido3-LF-sim-validation/submission5085/step1-simulation-ip-172-31-35-218-10490-job27949/tmp/tmp87j_yx7i/episodes/ETHZ_autolab_technical_track-4-0/camera.mp4']' returned non-zero exit status 1.
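
The host error above is raised because the runner shells out to shasum -a 256 to hash an artefact (here camera.mp4) and the command exits with a non-zero status, typically because the file is missing or unreadable. As an illustration only, and not the runner's actual code, a minimal sketch of computing the same digest in pure Python with hashlib, which removes the dependency on the external shasum binary:

import hashlib

def compute_sha256hex(path: str) -> str:
    # Stream the file in 1 MiB chunks so large artefacts (e.g. camera.mp4)
    # do not need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()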
Artefacts hidden.