
Submission 10374

Submission: 10374
Competing: yes
Challenge: aido5-LF-sanity-sim-validation
User: Fernanda Custodio Pereira do Carmo 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: sanity-check: 39942
Next:
User label: exercise_ros_template
Admin priority: 60
Blessing: n/a
User priority: 60

Job 39942


Episode: udem1-sc0-0

Evaluation jobs for this submission

Job ID | step | status | up to date | duration
39942 | sanity-check | success | yes | 0:01:42
driven_lanedir_consec_median: 0.026185073169267437
survival_time_median: 0.49999999999999994
deviation-center-line_median: 0.007364193832515594
in-drivable-lane_median: 0.0


other stats
agent_compute-ego_max: 0.02925395965576172
agent_compute-ego_mean: 0.02925395965576172
agent_compute-ego_median: 0.02925395965576172
agent_compute-ego_min: 0.02925395965576172
complete-iteration_max: 0.40302876992659137
complete-iteration_mean: 0.40302876992659137
complete-iteration_median: 0.40302876992659137
complete-iteration_min: 0.40302876992659137
deviation-center-line_max: 0.007364193832515594
deviation-center-line_mean: 0.007364193832515594
deviation-center-line_min: 0.007364193832515594
deviation-heading_max: 0.05162881792046155
deviation-heading_mean: 0.05162881792046155
deviation-heading_median: 0.05162881792046155
deviation-heading_min: 0.05162881792046155
driven_any_max: 0.0263020170306902
driven_any_mean: 0.0263020170306902
driven_any_median: 0.0263020170306902
driven_any_min: 0.0263020170306902
driven_lanedir_consec_max: 0.026185073169267437
driven_lanedir_consec_mean: 0.026185073169267437
driven_lanedir_consec_min: 0.026185073169267437
driven_lanedir_max: 0.026185073169267437
driven_lanedir_mean: 0.026185073169267437
driven_lanedir_median: 0.026185073169267437
driven_lanedir_min: 0.026185073169267437
get_duckie_state_max: 0.005423740907148881
get_duckie_state_mean: 0.005423740907148881
get_duckie_state_median: 0.005423740907148881
get_duckie_state_min: 0.005423740907148881
get_robot_state_max: 0.020749395543878727
get_robot_state_mean: 0.020749395543878727
get_robot_state_median: 0.020749395543878727
get_robot_state_min: 0.020749395543878727
get_state_dump_max: 0.017246658151799984
get_state_dump_mean: 0.017246658151799984
get_state_dump_median: 0.017246658151799984
get_state_dump_min: 0.017246658151799984
get_ui_image_max: 0.07295487143776634
get_ui_image_mean: 0.07295487143776634
get_ui_image_median: 0.07295487143776634
get_ui_image_min: 0.07295487143776634
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes details:
{
  "udem1-sc0-0-ego": {
    "driven_any": 0.0263020170306902,
    "get_ui_image": 0.07295487143776634,
    "step_physics": 0.20366291566328568,
    "survival_time": 0.49999999999999994,
    "driven_lanedir": 0.026185073169267437,
    "get_state_dump": 0.017246658151799984,
    "sim_render-ego": 0.009499419819224964,
    "get_robot_state": 0.020749395543878727,
    "get_duckie_state": 0.005423740907148881,
    "in-drivable-lane": 0.0,
    "agent_compute-ego": 0.02925395965576172,
    "deviation-heading": 0.05162881792046155,
    "complete-iteration": 0.40302876992659137,
    "set_robot_commands": 0.006312023509632458,
    "deviation-center-line": 0.007364193832515594,
    "driven_lanedir_consec": 0.026185073169267437,
    "sim_compute_sim_state": 0.032426617362282494,
    "sim_compute_performance-ego": 0.005323041569102894
  }
}
set_robot_commands_max: 0.006312023509632458
set_robot_commands_mean: 0.006312023509632458
set_robot_commands_median: 0.006312023509632458
set_robot_commands_min: 0.006312023509632458
sim_compute_performance-ego_max: 0.005323041569102894
sim_compute_performance-ego_mean: 0.005323041569102894
sim_compute_performance-ego_median: 0.005323041569102894
sim_compute_performance-ego_min: 0.005323041569102894
sim_compute_sim_state_max: 0.032426617362282494
sim_compute_sim_state_mean: 0.032426617362282494
sim_compute_sim_state_median: 0.032426617362282494
sim_compute_sim_state_min: 0.032426617362282494
sim_render-ego_max: 0.009499419819224964
sim_render-ego_mean: 0.009499419819224964
sim_render-ego_median: 0.009499419819224964
sim_render-ego_min: 0.009499419819224964
simulation-passed: 1
step_physics_max: 0.20366291566328568
step_physics_mean: 0.20366291566328568
step_physics_median: 0.20366291566328568
step_physics_min: 0.20366291566328568
survival_time_max: 0.49999999999999994
survival_time_mean: 0.49999999999999994
survival_time_min: 0.49999999999999994
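Since this job evaluated a single episode (udem1-sc0-0), the min, mean, median and max of every metric coincide, which is why each value above is repeated four times. The sketch below is an illustration, not the evaluator's actual code: it recomputes such aggregates from a subset of the per-episodes JSON using only the Python standard library; the variable names are assumptions.

import json
import statistics

# A subset of the "per-episodes" details shown above (values copied verbatim).
per_episodes = json.loads("""
{"udem1-sc0-0-ego": {"driven_any": 0.0263020170306902,
                     "survival_time": 0.49999999999999994,
                     "in-drivable-lane": 0.0,
                     "deviation-center-line": 0.007364193832515594,
                     "driven_lanedir_consec": 0.026185073169267437}}
""")

# Gather each metric across all episodes, then aggregate.
samples = {}
for episode, metrics in per_episodes.items():
    for name, value in metrics.items():
        samples.setdefault(name, []).append(value)

for name, values in sorted(samples.items()):
    # With a single episode these four numbers are identical.
    print(f"{name}_min    {min(values)}")
    print(f"{name}_mean   {statistics.mean(values)}")
    print(f"{name}_median {statistics.median(values)}")
    print(f"{name}_max    {max(values)}")

With more episodes the four aggregates would differ; the *_median values are the ones highlighted at the top of the job's statistics.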
39158 | sanity-check | success | no | 0:01:40
39156 | sanity-check | host-error | no | 0:00:39
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 678, in scoring_context
    yield cie
  File "experiment_manager.py", line 683, in go
    wrap(cie)
  File "experiment_manager.py", line 668, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "experiment_manager.py", line 119, in main
    raise InvalidEnvironment(msg=msg, lf=list_all_files("/fifos"))
duckietown_challenges.exceptions.InvalidEnvironment: Path /fifos/runner does not exist
│ lf: [/fifos/experiment_manager]
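A host-error typically indicates a problem on the evaluation machine rather than in the submission: here the experiment manager expected a named pipe at /fifos/runner to exist before scoring started, but only /fifos/experiment_manager was present. The sketch below illustrates that kind of pre-flight check; the helper names and the use of RuntimeError are assumptions for illustration, since the real experiment_manager.py raises duckietown_challenges.exceptions.InvalidEnvironment as shown in the traceback.

import os


def list_all_files(dirname: str) -> list:
    # Illustrative helper: walk dirname and collect every file path under it.
    found = []
    for root, _dirs, files in os.walk(dirname):
        found.extend(os.path.join(root, name) for name in files)
    return found


def check_runner_fifo(fifos_dir: str = "/fifos") -> None:
    # The evaluator and the experiment manager communicate through named
    # pipes under /fifos; if the runner side never created its pipe, the
    # evaluation cannot proceed and the job is reported as an environment
    # problem rather than as a failure of the submitted agent.
    runner = os.path.join(fifos_dir, "runner")
    if not os.path.exists(runner):
        msg = f"Path {runner} does not exist"
        # The real code raises InvalidEnvironment(msg=msg, lf=list_all_files("/fifos")).
        raise RuntimeError(f"{msg}; found: {list_all_files(fifos_dir)}")


if __name__ == "__main__":
    check_runner_fifo()

The same error appears in jobs 39136, 39135 and 39112 below, while other runs of the same sanity-check step succeeded, which points to a transient evaluator problem rather than to the submission itself.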
39136 | sanity-check | host-error | no | 0:00:40
InvalidEnvironment: Path /fifos/runner does not exist (identical traceback to job 39156).
39135 | sanity-check | host-error | no | 0:00:35
InvalidEnvironment: Path /fifos/runner does not exist (identical traceback to job 39156).
39112 | sanity-check | host-error | no | 0:00:50
InvalidEnvironment: Path /fifos/runner does not exist (identical traceback to job 39156).
39104 | sanity-check | success | no | 0:02:44
36622 | sanity-check | success | no | 0:01:49
36618 | sanity-check | success | no | 0:02:10