(No description.)
These are the metrics defined:
driven_lanedir_consec
This is the median distance traveled along a lane. (That is, going in circles will not increase this metric.)
This is discretized to tiles.
survival_time
This is the median survival time. The simulation is terminated when the car goes off the road or crashes into an obstacle.
deviation-center-line
This is the median lateral deviation from the center line.
in-drivable-lane
This is the median time spent outside of the drivable zones. For example, this penalizes driving in the wrong lane.
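All four metrics are medians taken over the evaluation episodes. The following is a minimal aggregation sketch, assuming each episode yields the four raw statistics; the field names and data layout are illustrative, not the challenge's actual scoring code:

from statistics import median

# Illustrative per-episode records; the field names mirror the metrics above,
# but the data layout is an assumption, not the evaluator's real output format.
episodes = [
    {"driven_lanedir_consec": 6.2, "survival_time": 45.0,
     "deviation_center_line": 0.08, "in_drivable_lane": 1.5},
    {"driven_lanedir_consec": 4.8, "survival_time": 30.0,
     "deviation_center_line": 0.12, "in_drivable_lane": 3.0},
]

def aggregate(episodes):
    """Each challenge metric is the median of the corresponding per-episode statistic."""
    keys = ["driven_lanedir_consec", "survival_time",
            "deviation_center_line", "in_drivable_lane"]
    return {k: median(e[k] for e in episodes) for k in keys}

print(aggregate(episodes))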
Depends on successful evaluation on LFVI 🚗🚗🚦 - Lane following + Vehicles + Intersections (simulation 👾, testing 🥇)
The submission must first pass testing.
The sum of the following tests should be at least 2.0.
Test on absolute scores:
- good_enough (1.0 points): driven_lanedir_consec_median
Test on relative performance:
- better-than-bea-straight (1.0 points): straight
.Depends on successful evaluation on LFV 🚗🚗 - Lane following + Vehicles (robotarium 🏎, validation 🏋)
The submission must first pass the LF real challenge.
The sum of the following tests should be at least 2.0 (see the sketch after this block).
Test on absolute scores:
- good_enough (1.0 points): driven_lanedir_consec
Test on relative performance:
- better-than-bea-straight (1.0 points): straight
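Both prerequisite blocks use the same pass rule: each test is worth 1.0 point, and the prerequisite is satisfied when the points of the passed tests sum to at least 2.0, so both tests must pass. A minimal sketch of that check; the threshold and point values come from the listings above, the function itself is illustrative:

def prerequisite_passes(passed_test_points, threshold=2.0):
    """Sum the points of the tests that passed and compare against the threshold."""
    return sum(passed_test_points.values()) >= threshold

# Both tests are worth 1.0 point, so both must pass to reach 2.0.
print(prerequisite_passes({"good_enough": 1.0, "better-than-bea-straight": 1.0}))  # True
print(prerequisite_passes({"good_enough": 1.0}))                                   # False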
The evaluation workflow executes the following steps and transition rules (a sketch of this state machine follows the list):
- At the beginning, execute step eval0.
- If step eval0 has result success, then execute step eval0-visualize.
- If step eval0 has result failed, then declare the submission FAILED.
- If step eval0 has result error, then declare the submission ERROR.
- If step eval0 has result success, then execute step eval0-videos.
- If step eval1 has result success, then execute step eval1-videos.
- If step eval2 has result success, then execute step eval2-videos.
- If step eval0 has result success, then execute step eval1.
- If step eval0-visualize has result failed, then declare the submission FAILED.
- If step eval0-visualize has result error, then declare the submission ERROR.
- If step eval1 has result success, then execute step eval1-visualize.
- If step eval1 has result failed, then declare the submission FAILED.
- If step eval1 has result error, then declare the submission ERROR.
- If step eval1 has result success, then execute step eval2.
- If step eval1-visualize has result failed, then declare the submission FAILED.
- If step eval1-visualize has result error, then declare the submission ERROR.
- If step eval2 has result success, then execute step eval2-visualize.
- If step eval2 has result failed, then declare the submission FAILED.
- If step eval2 has result error, then declare the submission ERROR.
- If step eval2-visualize has result success, then declare the submission SUCCESS.
- If step eval2-visualize has result failed, then declare the submission FAILED.
- If step eval2-visualize has result error, then declare the submission ERROR.
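These rules form a small state machine over step results. Below is a compact dispatch table that encodes them; the table content comes from the list above, while the driver function and its names are illustrative, not the Duckietown evaluation server's actual implementation:

# Transition rules: (step, result) -> actions.
# "execute:<step>" runs another step; "declare:<STATUS>" ends the submission.
RULES = {
    ("eval0", "success"): ["execute:eval0-visualize", "execute:eval0-videos", "execute:eval1"],
    ("eval0", "failed"): ["declare:FAILED"],
    ("eval0", "error"): ["declare:ERROR"],
    ("eval0-visualize", "failed"): ["declare:FAILED"],
    ("eval0-visualize", "error"): ["declare:ERROR"],
    ("eval1", "success"): ["execute:eval1-visualize", "execute:eval1-videos", "execute:eval2"],
    ("eval1", "failed"): ["declare:FAILED"],
    ("eval1", "error"): ["declare:ERROR"],
    ("eval1-visualize", "failed"): ["declare:FAILED"],
    ("eval1-visualize", "error"): ["declare:ERROR"],
    ("eval2", "success"): ["execute:eval2-visualize", "execute:eval2-videos"],
    ("eval2", "failed"): ["declare:FAILED"],
    ("eval2", "error"): ["declare:ERROR"],
    ("eval2-visualize", "success"): ["declare:SUCCESS"],
    ("eval2-visualize", "failed"): ["declare:FAILED"],
    ("eval2-visualize", "error"): ["declare:ERROR"],
}

def next_actions(step, result):
    """Look up what to do after a step finishes with the given result."""
    return RULES.get((step, result), [])

# Example: eval0 succeeded, so its visualization, its videos, and eval1 are scheduled.
print(next_actions("eval0", "success"))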
eval0
Timeout 18000.0
Evaluation in the robotarium.
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-off-lfvi-real-validation-eval0-evaluator:2020_02_28_11_20_14@sha256:14937e5e9a768219ba7097107df82d268b8c35a5f7fe87dc71269d4d164f6e9e
    environment: {}
    ports:
    - 8005:8005
The text SUBMISSION_CONTAINER will be replaced with the user container.
Resources required:
- # Duckiebots: 2
- AIDO 2 Map LFVI public: 1
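The skeleton above lists only the evaluator service. Assuming the full template also defines a solution service whose image is the literal placeholder SUBMISSION_CONTAINER, the substitution is a plain text replacement, sketched below; the solution service and the example image name are assumptions, not part of the skeleton above:

# Illustrative only: replace the SUBMISSION_CONTAINER placeholder in a compose
# template with the user's image. The solution service here is assumed.
template = """\
version: '3'
services:
  solution:
    image: SUBMISSION_CONTAINER
"""

def fill_template(template, user_image):
    return template.replace("SUBMISSION_CONTAINER", user_image)

print(fill_template(template, "docker.io/yourname/your-submission:latest"))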
eval1
Timeout 18000.0
Evaluation in the robotarium.
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-off-lfvi-real-validation-eval1-evaluator:2020_02_28_11_20_25@sha256:14937e5e9a768219ba7097107df82d268b8c35a5f7fe87dc71269d4d164f6e9e
    environment: {}
    ports:
    - 8005:8005
The text SUBMISSION_CONTAINER will be replaced with the user container.
Resources required:
- # Duckiebots: 2
- AIDO 2 Map LFVI public: 1
eval2
Timeout 18000.0
Evaluation in the robotarium.
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-off-lfvi-real-validation-eval2-evaluator:2020_02_28_11_20_38@sha256:14937e5e9a768219ba7097107df82d268b8c35a5f7fe87dc71269d4d164f6e9e
    environment: {}
    ports:
    - 8005:8005
The text SUBMISSION_CONTAINER will be replaced with the user container.
Resources required:
- # Duckiebots: 2
- AIDO 2 Map LFVI public: 1
eval0-videos
Timeout 10800.0
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-off-lfvi-real-validation-eval0-videos-evaluator:2020_02_28_11_21_22@sha256:3b828381f8de914eedb4ad4431e7c50cd3aa4a19be6498f6abb11dd4d9fb8c60
    environment:
      WORKER_I: '0'
      WORKER_N: '1'
      INPUT_DIR: /challenges/previous-steps/eval0/logs_raw
      OUTPUT_DIR: /challenges/challenge-evaluation-output
      DEBUG_OVERLAY: '1'
      BAG_NAME_FILTER: autobot,watchtower
      OUTPUT_FRAMERATE: '7'
The text SUBMISSION_CONTAINER will be replaced with the user container.
Resources required:
- Cloud simulations: 1
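The environment variables suggest that the video step shards the recorded bag files across workers (WORKER_I of WORKER_N), keeps only bags whose names match BAG_NAME_FILTER, and renders them at OUTPUT_FRAMERATE frames per second into OUTPUT_DIR. A minimal sketch of how a worker could interpret these variables; only the variable names and default values come from the skeleton above, the file handling is assumed:

import os
from pathlib import Path

# Read the configuration that the compose skeleton passes to the worker.
worker_i = int(os.environ.get("WORKER_I", "0"))
worker_n = int(os.environ.get("WORKER_N", "1"))
input_dir = Path(os.environ.get("INPUT_DIR", "/challenges/previous-steps/eval0/logs_raw"))
output_dir = Path(os.environ.get("OUTPUT_DIR", "/challenges/challenge-evaluation-output"))
name_filter = os.environ.get("BAG_NAME_FILTER", "autobot,watchtower").split(",")
framerate = int(os.environ.get("OUTPUT_FRAMERATE", "7"))

# Keep only bags whose names match the filter, then take this worker's share.
bags = sorted(p for p in input_dir.glob("*.bag")
              if any(token in p.name for token in name_filter))
my_bags = [b for i, b in enumerate(bags) if i % worker_n == worker_i]

for bag in my_bags:
    # Rendering itself is out of scope here; a real worker would decode the bag
    # and encode a video at `framerate` frames per second into `output_dir`.
    print(f"would render {bag.name} at {framerate} fps into {output_dir}")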
eval1-videos
Timeout 10800.0
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-off-lfvi-real-validation-eval1-videos-evaluator:2020_02_28_11_21_40@sha256:3b828381f8de914eedb4ad4431e7c50cd3aa4a19be6498f6abb11dd4d9fb8c60
    environment:
      WORKER_I: '0'
      WORKER_N: '1'
      INPUT_DIR: /challenges/previous-steps/eval1/logs_raw
      OUTPUT_DIR: /challenges/challenge-evaluation-output
      DEBUG_OVERLAY: '1'
      BAG_NAME_FILTER: autobot,watchtower
      OUTPUT_FRAMERATE: '7'
The text SUBMISSION_CONTAINER will be replaced with the user container.
Resources required:
- Cloud simulations: 1
eval2-videos
Timeout 10800.0
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-off-lfvi-real-validation-eval2-videos-evaluator:2020_02_28_11_21_56@sha256:3b828381f8de914eedb4ad4431e7c50cd3aa4a19be6498f6abb11dd4d9fb8c60
    environment:
      WORKER_I: '0'
      WORKER_N: '1'
      INPUT_DIR: /challenges/previous-steps/eval2/logs_raw
      OUTPUT_DIR: /challenges/challenge-evaluation-output
      DEBUG_OVERLAY: '1'
      BAG_NAME_FILTER: autobot,watchtower
      OUTPUT_FRAMERATE: '7'
The text SUBMISSION_CONTAINER will be replaced with the user container.
Resources required:
- Cloud simulations: 1
eval0-visualize
Timeout 1080.0
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-off-lfvi-real-validation-eval0-visualize-evaluator:2020_02_28_11_20_50@sha256:d95babe778e197cc8f534d425915a334466289b2238c4e27bcde97c86a4a0674
    environment:
      STEP_NAME: eval0
The text SUBMISSION_CONTAINER will be replaced with the user container.
Resources required:
- Cloud simulations: 1
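The visualization steps receive only STEP_NAME, which tells the container which evaluation step's output to post-process. A short sketch of how such a container might locate its input; the directory layout mirrors the INPUT_DIR used by the *-videos steps and is otherwise an assumption:

import os
from pathlib import Path

# STEP_NAME identifies which evaluation step's logs to visualize.
step_name = os.environ.get("STEP_NAME", "eval0")

# Assumed layout, mirroring the INPUT_DIR used by the *-videos steps.
logs_dir = Path("/challenges/previous-steps") / step_name / "logs_raw"
output_dir = Path("/challenges/challenge-evaluation-output")

print(f"visualizing logs from {logs_dir} into {output_dir}")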
eval1-visualize
Timeout 1080.0
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-off-lfvi-real-validation-eval1-visualize-evaluator:2020_02_28_11_21_00@sha256:d95babe778e197cc8f534d425915a334466289b2238c4e27bcde97c86a4a0674
    environment:
      STEP_NAME: eval1
The text SUBMISSION_CONTAINER will be replaced with the user container.
Resources required:
- Cloud simulations: 1
eval2-visualize
Timeout 1080.0
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-off-lfvi-real-validation-eval2-visualize-evaluator:2020_02_28_11_21_11@sha256:d95babe778e197cc8f534d425915a334466289b2238c4e27bcde97c86a4a0674
    environment:
      STEP_NAME: eval2
The text SUBMISSION_CONTAINER will be replaced with the user container.
Resources required:
- Cloud simulations: 1