
Submission 4979

Submission: 4979
Competing: yes
Challenge: aido3-LF-sim-validation
User: Dhaivat Bhatt 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: step1-simulation: 27750
Next:
User label: whattheduck
Admin priority: 50
Blessing: n/a
User priority: 50

Job 27750

Episodes:

ETHZ_autolab_technical_track-0-0
ETHZ_autolab_technical_track-1-0
ETHZ_autolab_technical_track-2-0
ETHZ_autolab_technical_track-3-0
ETHZ_autolab_technical_track-4-0

Evaluation jobs for this submission

Job ID: 27750 | step: step1-simulation | status: success | up to date: yes | duration: 0:07:08
Artefacts hidden.
driven_lanedir_consec_median: 4.082115406634032
survival_time_median: 14.950000000000076
deviation-center-line_median: 0.8733208408516854
in-drivable-lane_median: 0


Other stats (see the aggregation sketch after this list)
agent_compute-ego_max: 0.016230497874465642
agent_compute-ego_mean: 0.01582370879603367
agent_compute-ego_median: 0.015786104202270508
agent_compute-ego_min: 0.015611329078674316
deviation-center-line_max: 1.1988153842309797
deviation-center-line_mean: 0.8891810953637013
deviation-center-line_min: 0.457119079459828
deviation-heading_max: 4.603896296510659
deviation-heading_mean: 3.3745146533731103
deviation-heading_median: 3.4462507507869806
deviation-heading_min: 2.026399128496376
driven_any_max: 5.235610564103697
driven_any_mean: 4.2073329901943355
driven_any_median: 4.337221695995397
driven_any_min: 2.511941592544562
driven_lanedir_consec_max: 4.774594144862989
driven_lanedir_consec_mean: 3.966444717313189
driven_lanedir_consec_min: 2.372785413025941
driven_lanedir_max: 4.774594144862989
driven_lanedir_mean: 3.966444717313189
driven_lanedir_median: 4.082115406634032
driven_lanedir_min: 2.372785413025941
in-drivable-lane_max: 1.149999999999996
in-drivable-lane_mean: 0.2599999999999996
in-drivable-lane_min: 0
per-episodes details:
{"ETHZ_autolab_technical_track-0-0": {"driven_any": 2.511941592544562, "sim_physics": 0.06012433066087611, "survival_time": 10.20000000000001, "driven_lanedir": 2.372785413025941, "sim_render-ego": 0.007164530894335578, "in-drivable-lane": 0.15000000000000213, "agent_compute-ego": 0.016230497874465642, "deviation-heading": 2.026399128496376, "set_robot_commands": 0.006796944375131645, "deviation-center-line": 0.457119079459828, "driven_lanedir_consec": 2.372785413025941, "sim_compute_sim_state": 0.0040878186038896145, "sim_compute_performance-ego": 0.004206178235072715, "sim_compute_robot_state-ego": 0.005180877797743853},
"ETHZ_autolab_technical_track-1-0": {"driven_any": 4.259173495200878, "sim_physics": 0.05706097443898519, "survival_time": 14.950000000000076, "driven_lanedir": 4.054713116408172, "sim_render-ego": 0.00722086509068807, "in-drivable-lane": 0, "agent_compute-ego": 0.015786104202270508, "deviation-heading": 4.001803927759017, "set_robot_commands": 0.0068158102035522465, "deviation-center-line": 1.1988153842309797, "driven_lanedir_consec": 4.054713116408172, "sim_compute_sim_state": 0.00369277556737264, "sim_compute_performance-ego": 0.004105294545491536, "sim_compute_robot_state-ego": 0.005064655145009359},
"ETHZ_autolab_technical_track-2-0": {"driven_any": 5.235610564103697, "sim_physics": 0.05892051378885905, "survival_time": 14.950000000000076, "driven_lanedir": 4.774594144862989, "sim_render-ego": 0.007201290925343831, "in-drivable-lane": 1.149999999999996, "agent_compute-ego": 0.015611329078674316, "deviation-heading": 2.794223163312519, "set_robot_commands": 0.006816196441650391, "deviation-center-line": 0.8733208408516854, "driven_lanedir_consec": 4.774594144862989, "sim_compute_sim_state": 0.003451952934265137, "sim_compute_performance-ego": 0.004103095531463623, "sim_compute_robot_state-ego": 0.00510380744934082},
"ETHZ_autolab_technical_track-3-0": {"driven_any": 4.692717603127144, "sim_physics": 0.05725670337677002, "survival_time": 14.950000000000076, "driven_lanedir": 4.548015505634812, "sim_render-ego": 0.007187071641286214, "in-drivable-lane": 0, "agent_compute-ego": 0.015821771621704103, "deviation-heading": 3.4462507507869806, "set_robot_commands": 0.006763835748036702, "deviation-center-line": 0.7847633637907224, "driven_lanedir_consec": 4.548015505634812, "sim_compute_sim_state": 0.003666075865427653, "sim_compute_performance-ego": 0.004074848492940267, "sim_compute_robot_state-ego": 0.00508908748626709},
"ETHZ_autolab_technical_track-4-0": {"driven_any": 4.337221695995397, "sim_physics": 0.06060766140619914, "survival_time": 14.950000000000076, "driven_lanedir": 4.082115406634032, "sim_render-ego": 0.007198765277862549, "in-drivable-lane": 0, "agent_compute-ego": 0.015668841203053792, "deviation-heading": 4.603896296510659, "set_robot_commands": 0.006791080633799235, "deviation-center-line": 1.1318868084852909, "driven_lanedir_consec": 4.082115406634032, "sim_compute_sim_state": 0.003625931739807129, "sim_compute_performance-ego": 0.0041128333409627274, "sim_compute_robot_state-ego": 0.0051438307762146}}
set_robot_commands_max: 0.006816196441650391
set_robot_commands_mean: 0.006796773480434043
set_robot_commands_median: 0.006796944375131645
set_robot_commands_min: 0.006763835748036702
sim_compute_performance-ego_max: 0.004206178235072715
sim_compute_performance-ego_mean: 0.004120450029186174
sim_compute_performance-ego_median: 0.004105294545491536
sim_compute_performance-ego_min: 0.004074848492940267
sim_compute_robot_state-ego_max: 0.005180877797743853
sim_compute_robot_state-ego_mean: 0.0051164517309151445
sim_compute_robot_state-ego_median: 0.00510380744934082
sim_compute_robot_state-ego_min: 0.005064655145009359
sim_compute_sim_state_max: 0.0040878186038896145
sim_compute_sim_state_mean: 0.0037049109421524343
sim_compute_sim_state_median: 0.003666075865427653
sim_compute_sim_state_min: 0.003451952934265137
sim_physics_max: 0.06060766140619914
sim_physics_mean: 0.0587940367343379
sim_physics_median: 0.05892051378885905
sim_physics_min: 0.05706097443898519
sim_render-ego_max: 0.00722086509068807
sim_render-ego_mean: 0.00719450476590325
sim_render-ego_median: 0.007198765277862549
sim_render-ego_min: 0.007164530894335578
simulation-passed: 1
survival_time_max: 14.950000000000076
survival_time_mean: 14.000000000000062
survival_time_min: 10.20000000000001
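
The aggregate *_min / *_mean / *_median / *_max rows above are taken over the five episodes in the per-episodes details blob. Below is a minimal sketch of that aggregation in Python, assuming the blob has been saved to a file named per_episodes.json; the file name and script are illustrative only, not part of the evaluator.

import json
from statistics import mean, median

with open("per_episodes.json") as f:
    episodes = json.load(f)  # episode name -> dict of per-episode metrics

for metric in sorted({k for ep in episodes.values() for k in ep}):
    values = [ep[metric] for ep in episodes.values()]
    # Reproduces rows such as survival_time_median: 14.950000000000076
    # and driven_lanedir_consec_max: 4.774594144862989.
    print(metric + "_min:", min(values))
    print(metric + "_mean:", mean(values))
    print(metric + "_median:", median(values))
    print(metric + "_max:", max(values))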
Job ID: 27749 | step: step1-simulation | status: success | up to date: yes | duration: 0:08:57
Artefacts hidden.
Job ID: 27747 | step: step1-simulation | status: host-error | up to date: yes | duration: 0:07:34
Uncaught exception:
Traceback (most recent call last):
  File "/project/src/duckietown_challenges_runner/runner.py", line 648, in get_cr
    wd, aws_config, copy_to_machine_cache=copy_to_machine_cache
  File "/project/src/duckietown_challenges_runner/runner.py", line 1309, in upload_files
    aws_config, toupload, copy_to_machine_cache=copy_to_machine_cache
  File "/project/src/duckietown_challenges_runner/runner.py", line 1563, in upload
    sha256hex = compute_sha256hex(realfile)
  File "/project/src/duckietown_challenges_runner/runner.py", line 1619, in compute_sha256hex
    res: bytes = subprocess.check_output(cmd)
  File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['shasum', '-a', '256', '/tmp/duckietown/DT18/evaluator/executions/aido3-LF-sim-validation/submission4979/step1-simulation-ip-172-31-35-218-10490-job27747/challenge-evaluation-output/episodes/ETHZ_autolab_technical_track-1-0/camera.mp4']' returned non-zero exit status 1.
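
The host error above occurs in the runner's artefact upload step: compute_sha256hex shells out to shasum -a 256, and the command exited non-zero for the episode's camera.mp4 (typically because the file was missing or unreadable), which subprocess.check_output turns into a CalledProcessError. The following is a minimal sketch of the same checksum computed in pure Python with hashlib; it is illustrative only, not the duckietown_challenges_runner implementation.

import hashlib

def sha256hex(path: str) -> str:
    # Stream the file in 1 MiB chunks so large artefacts (e.g. camera.mp4)
    # do not have to fit in memory; a missing file raises FileNotFoundError
    # here instead of a CalledProcessError from an external shasum call.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()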
Artefacts hidden.
Job ID: 27746 | step: step1-simulation | status: success | up to date: yes | duration: 0:07:05
Artefacts hidden.