
Submission 6824

Submission: 6824
Competing: yes
Challenge: aido5-LF-sim-validation
User: Anthony Courchesne 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58591
Next:
User label: baseline-duckietown
Admin priority: 50
Blessing: n/a
User priority: 50

58591

Episodes:

- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | duration
58591 | LFv-sim | success | yes | 0:33:52
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 7.2561674910046206
survival_time_median: 59.99999999999873
deviation-center-line_median: 4.447810853780256
in-drivable-lane_median: 3.549999999999927


other stats
agent_compute-ego0_max: 0.0122059499294335
agent_compute-ego0_mean: 0.011825990865470766
agent_compute-ego0_median: 0.011755063869276211
agent_compute-ego0_min: 0.011587885793897136
complete-iteration_max: 0.19594375041005613
complete-iteration_mean: 0.16987831512160542
complete-iteration_median: 0.16563907283033358
complete-iteration_min: 0.1522913644156984
deviation-center-line_max: 5.732917559456967
deviation-center-line_mean: 4.552426192411831
deviation-center-line_min: 3.5811655026298443
deviation-heading_max: 10.69052188487686
deviation-heading_mean: 8.345881528259296
deviation-heading_median: 8.589949300649815
deviation-heading_min: 5.513105626860693
driven_any_max: 7.92108871857497
driven_any_mean: 7.919413090123248
driven_any_median: 7.921018083041957
driven_any_min: 7.914527475834106
driven_lanedir_consec_max: 7.429352329136817
driven_lanedir_consec_mean: 6.534745105040045
driven_lanedir_consec_min: 4.197293109014121
driven_lanedir_max: 7.429352329136817
driven_lanedir_mean: 7.014488019015266
driven_lanedir_median: 7.2561674910046206
driven_lanedir_min: 6.1162647649150035
get_duckie_state_max: 1.1958647131622086e-06
get_duckie_state_mean: 1.1135299041011153e-06
get_duckie_state_median: 1.1350193388952403e-06
get_duckie_state_min: 9.88216225451772e-07
get_robot_state_max: 0.0036013983965515594
get_robot_state_mean: 0.0035511875629028015
get_robot_state_median: 0.0035490059634231707
get_robot_state_min: 0.003505339928213306
get_state_dump_max: 0.004524639107404799
get_state_dump_mean: 0.0043963813761886605
get_state_dump_median: 0.004407593749345689
get_state_dump_min: 0.004245698898658466
get_ui_image_max: 0.0358917572218413
get_ui_image_mean: 0.030331856454044057
get_ui_image_median: 0.029832512413234535
get_ui_image_min: 0.02577064376786587
in-drivable-lane_max: 12.54999999999932
in-drivable-lane_mean: 5.399999999999788
in-drivable-lane_min: 1.949999999999978
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 7.914527475834106, "get_ui_image": 0.02777893636546266, "step_physics": 0.08953279738223721, "survival_time": 59.99999999999873, "driven_lanedir": 6.1162647649150035, "get_state_dump": 0.004245698898658466, "get_robot_state": 0.003552205953669489, "sim_render-ego0": 0.0035929417828536848, "get_duckie_state": 9.88216225451772e-07, "in-drivable-lane": 12.54999999999932, "deviation-heading": 5.513105626860693, "agent_compute-ego0": 0.011653805454009578, "complete-iteration": 0.15285022570429793, "set_robot_commands": 0.002087161503266931, "deviation-center-line": 3.9818036788864903, "driven_lanedir_consec": 4.197293109014121, "sim_compute_sim_state": 0.008488407539983872, "sim_compute_performance-ego0": 0.001846801827690385},
 "LF-norm-zigzag-000-ego0": {"driven_any": 7.921049832046091, "get_ui_image": 0.0358917572218413, "step_physics": 0.11975207555105448, "survival_time": 59.99999999999873, "driven_lanedir": 7.429352329136817, "get_state_dump": 0.004502667674017786, "get_robot_state": 0.003545805973176853, "sim_render-ego0": 0.0036469341614760527, "get_duckie_state": 1.1958647131622086e-06, "in-drivable-lane": 1.949999999999978, "deviation-heading": 10.69052188487686, "agent_compute-ego0": 0.0122059499294335, "complete-iteration": 0.19594375041005613, "set_robot_commands": 0.0021052084596429044, "deviation-center-line": 4.913818028674021, "driven_lanedir_consec": 7.429352329136817, "sim_compute_sim_state": 0.012338063599763563, "sim_compute_performance-ego0": 0.0018784968084737128},
 "LF-norm-techtrack-000-ego0": {"driven_any": 7.92108871857497, "get_ui_image": 0.031886088461006414, "step_physics": 0.10635306436155956, "survival_time": 59.99999999999873, "driven_lanedir": 7.116221861008391, "get_state_dump": 0.0043125198246735915, "get_robot_state": 0.003505339928213306, "sim_render-ego0": 0.0035827149955755862, "get_duckie_state": 1.1047455393007455e-06, "in-drivable-lane": 4.3499999999998735, "deviation-heading": 9.689191254611654, "agent_compute-ego0": 0.01185632228454285, "complete-iteration": 0.17842791995636925, "set_robot_commands": 0.002062793576052346, "deviation-center-line": 3.5811655026298443, "driven_lanedir_consec": 7.116221861008391, "sim_compute_sim_state": 0.01295200851338789, "sim_compute_performance-ego0": 0.001844215154846344},
 "LF-norm-small_loop-000-ego0": {"driven_any": 7.920986334037825, "get_ui_image": 0.02577064376786587, "step_physics": 0.09300821587803164, "survival_time": 59.99999999999873, "driven_lanedir": 7.39611312100085, "get_state_dump": 0.004524639107404799, "get_robot_state": 0.0036013983965515594, "sim_render-ego0": 0.0036355980627741247, "get_duckie_state": 1.1652931384897352e-06, "in-drivable-lane": 2.7499999999999813, "deviation-heading": 7.490707346687977, "agent_compute-ego0": 0.011587885793897136, "complete-iteration": 0.1522913644156984, "set_robot_commands": 0.002156275892932647, "deviation-center-line": 5.732917559456967, "driven_lanedir_consec": 7.39611312100085, "sim_compute_sim_state": 0.006021797011833604, "sim_compute_performance-ego0": 0.0019071451531758811}}
set_robot_commands_max: 0.002156275892932647
set_robot_commands_mean: 0.002102859857973707
set_robot_commands_median: 0.0020961849814549174
set_robot_commands_min: 0.002062793576052346
sim_compute_performance-ego0_max: 0.0019071451531758811
sim_compute_performance-ego0_mean: 0.0018691647360465809
sim_compute_performance-ego0_median: 0.0018626493180820488
sim_compute_performance-ego0_min: 0.001844215154846344
sim_compute_sim_state_max: 0.01295200851338789
sim_compute_sim_state_mean: 0.009950069166242232
sim_compute_sim_state_median: 0.010413235569873718
sim_compute_sim_state_min: 0.006021797011833604
sim_render-ego0_max: 0.0036469341614760527
sim_render-ego0_mean: 0.003614547250669862
sim_render-ego0_median: 0.003614269922813905
sim_render-ego0_min: 0.0035827149955755862
simulation-passed: 1
step_physics_max: 0.11975207555105448
step_physics_mean: 0.10216153829322072
step_physics_median: 0.0996806401197956
step_physics_min: 0.08953279738223721
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
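The aggregate statistics above are simple per-episode reductions: each `_min`/`_max`/`_mean`/`_median` value is computed over the four episodes listed in the per-episodes details. As an illustration (not the evaluator's actual code), the deviation-center-line aggregates can be reproduced from the four per-episode values:

```python
import statistics

# Per-episode deviation-center-line values, copied from the
# "per-episodes details" JSON above.
deviation_center_line = {
    "LF-norm-loop-000-ego0": 3.9818036788864903,
    "LF-norm-zigzag-000-ego0": 4.913818028674021,
    "LF-norm-techtrack-000-ego0": 3.5811655026298443,
    "LF-norm-small_loop-000-ego0": 5.732917559456967,
}

values = list(deviation_center_line.values())
print("min:   ", min(values))                # deviation-center-line_min
print("max:   ", max(values))                # deviation-center-line_max
print("mean:  ", statistics.mean(values))    # ~4.5524, deviation-center-line_mean
print("median:", statistics.median(values))  # ~4.4478, deviation-center-line_median
```

The same reduction over the per-episode `survival_time` values explains why min, mean, median, and max all equal 59.99999999999873: every episode ran to the full 60-second horizon.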
No reset possible
58590 | LFv-sim | success | yes | 0:28:54
58589 | LFv-sim | success | yes | 0:37:00
52540 | LFv-sim | error | no | 0:09:40
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139619253855616
- M:video_aido:cmdline(in:/;out:/) 139619565910432
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
52521 | LFv-sim | error | no | 0:11:07
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140661028392672
- M:video_aido:cmdline(in:/;out:/) 140661028391664
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
41827 | LFv-sim | success | no | 0:08:37
38403 | LFv-sim | error | no | 0:01:27
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.49-py3.8.egg/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/media/sdb/duckietown-tmp-evals/production/aido5-LF-sim-validation/submission6824/LFv-sim-reg03-34ec761a5593-1-job38403-a-wd/challenge-results/challenge_results.yaml' does not exist.
36469 | LFv-sim | success | no | 0:10:09
36468 | LFv-sim | success | no | 0:10:17
35887 | LFv-sim | success | no | 0:01:11
35884 | LFv-sim | success | no | 0:01:04
35468 | LFv-sim | error | no | 0:22:02
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg03-0c28c9d61367-1-job35468:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg03-0c28c9d61367-1-job35468/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish and some times that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg03-0c28c9d61367-1-job35468/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg03-0c28c9d61367-1-job35468/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg03-0c28c9d61367-1-job35468/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg03-0c28c9d61367-1-job35468/logs/challenges-runner/stderr.log
35154 | LFv-sim | success | no | 0:24:16
34429 | LFv-sim | success | no | 0:27:05
34258 | LFv-sim | aborted | no | 0:24:19
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg04-f0a5d4b0baf5-1-job34258:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg04-f0a5d4b0baf5-1-job34258/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish and some times that there was an import error.
Check the evaluator log to see what happened.
33912 | LFv-sim | host-error | no | 0:00:00
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.29-py3.8.egg/duckietown_challenges_runner/runner.py", line 628, in get_cr
    os.makedirs(wd)
  File "/usr/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
OSError: [Errno 28] No space left on device: '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg03-c2bc3037870e-1-job33912'
33907 | LFv-sim | host-error | no | 0:00:00
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.29-py3.8.egg/duckietown_challenges_runner/runner.py", line 628, in get_cr
    os.makedirs(wd)
  File "/usr/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
OSError: [Errno 28] No space left on device: '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg01-53440c9394b5-1-job33907'
33900 | LFv-sim | host-error | no | 0:00:00
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.29-py3.8.egg/duckietown_challenges_runner/runner.py", line 628, in get_cr
    os.makedirs(wd)
  File "/usr/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
OSError: [Errno 28] No space left on device: '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg01-53440c9394b5-1-job33900'
33896 | LFv-sim | host-error | no | 0:00:00
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.29-py3.8.egg/duckietown_challenges_runner/runner.py", line 628, in get_cr
    os.makedirs(wd)
  File "/usr/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
OSError: [Errno 28] No space left on device: '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg01-53440c9394b5-1-job33896'
33891 | LFv-sim | host-error | no | 0:00:00
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.29-py3.8.egg/duckietown_challenges_runner/runner.py", line 628, in get_cr
    os.makedirs(wd)
  File "/usr/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
OSError: [Errno 28] No space left on device: '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg07-c4e193407567-1-job33891'
33886 | LFv-sim | host-error | no | 0:00:00
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.29-py3.8.egg/duckietown_challenges_runner/runner.py", line 628, in get_cr
    os.makedirs(wd)
  File "/usr/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
OSError: [Errno 28] No space left on device: '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg04-bf35e9d68df4-1-job33886'
33876 | LFv-sim | host-error | no | 0:00:00
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.29-py3.8.egg/duckietown_challenges_runner/runner.py", line 628, in get_cr
    os.makedirs(wd)
  File "/usr/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
OSError: [Errno 28] No space left on device: '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg11-951de1eeccca-1-job33876'
33873 | LFv-sim | host-error | no | 0:00:00
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.29-py3.8.egg/duckietown_challenges_runner/runner.py", line 628, in get_cr
    os.makedirs(wd)
  File "/usr/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
OSError: [Errno 28] No space left on device: '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6824/LFv-sim-reg05-5ca0d35e6d82-1-job33873'
33397 | LFv-sim | success | no | 0:14:11
33396 | LFv-sim | success | no | 0:16:57