
Challenge "LFV 🚗🚗 - Lane following + Vehicles (robotarium 🏎, validation 🏋)"

Challenge description

(No description.)

Challenge logistics

Scoring

Scoring criteria

The following metrics are defined:

Traveled distance - driven_lanedir_consec

This is the median distance traveled along the lane. (That is, going in circles will not increase this metric.)

The distance is discretized to tiles.

Survival time - survival_time

This is the median survival time. An episode is terminated when the vehicle leaves the road or crashes into an obstacle.

Lateral deviation - deviation-center-line

This is the median lateral deviation from the center line.

Major infractions - in-drivable-lane

This is the median time spent outside of the drivable zones. For example, this penalizes driving in the wrong lane.
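
For intuition, here is a hypothetical illustration of how per-episode values are aggregated into the challenge scores. The metric names are the ones defined above; the episode values, the number of episodes, and the layout are purely illustrative and do not reflect the evaluator's actual output format.

# Hypothetical per-episode values and their medians (illustration only).
episodes:
    ep0: {driven_lanedir_consec: 2.1, survival_time: 14.0, deviation-center-line: 0.06, in-drivable-lane: 0.0}
    ep1: {driven_lanedir_consec: 3.4, survival_time: 21.5, deviation-center-line: 0.04, in-drivable-lane: 1.2}
    ep2: {driven_lanedir_consec: 0.8, survival_time: 6.0, deviation-center-line: 0.09, in-drivable-lane: 0.0}
medians:
    driven_lanedir_consec: 2.1    # median of [2.1, 3.4, 0.8]
    survival_time: 14.0           # median of [14.0, 21.5, 6.0]
    deviation-center-line: 0.06   # median of [0.06, 0.04, 0.09]
    in-drivable-lane: 0.0         # median of [0.0, 1.2, 0.0]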

Dependencies

Depends on successful evaluation on LFV 🚗🚗 - Lane following + Vehicles (simulation 👾, testing 🥇)

The submission must first pass this testing challenge.

The sum of the following tests must be at least 2.0; since each test is worth 1.0 point, both must pass.

Test on absolute scores:

good_enough (1.0 points)
Obtain at least 0.2 for score driven_lanedir_consec_median.

Test on relative performance:

better-than-bea-straight (1.0 points)
Do at least as well as a submission of Bea Baselines labeled straight.

Depends on successful evaluation on LF 🚗 - Lane following (robotarium 🏎, validation 🏋)

The submission must first pass the real-robot LF challenge.

The sum of the following tests should be at least 2.0.

Test on absolute scores:

good_enough (1.0 points)
Obtain at least 0.2 for score driven_lanedir_consec.

Test on relative performance:

better-than-bea-straight (1.0 points)
Do at least as well as a submission of Bea Baselines labeled straight.

Details

Technical details

Evaluation steps details

  • At the beginning execute step eval0.

  • If step eval0 finishes with status success, then execute steps eval0-visualize, eval0-videos, and eval1.

  • If step eval0 finishes with status failed, then declare the submission FAILED.

  • If step eval0 finishes with status error, then declare the submission ERROR.

  • If step eval0-visualize finishes with status failed, then declare the submission FAILED.

  • If step eval0-visualize finishes with status error, then declare the submission ERROR.

  • If step eval1 finishes with status success, then execute steps eval1-visualize, eval1-videos, and eval2.

  • If step eval1 finishes with status failed, then declare the submission FAILED.

  • If step eval1 finishes with status error, then declare the submission ERROR.

  • If step eval1-visualize finishes with status failed, then declare the submission FAILED.

  • If step eval1-visualize finishes with status error, then declare the submission ERROR.

  • If step eval2 finishes with status success, then execute steps eval2-visualize and eval2-videos.

  • If step eval2 finishes with status failed, then declare the submission FAILED.

  • If step eval2 finishes with status error, then declare the submission ERROR.

  • If step eval2-visualize finishes with status success, then declare the submission SUCCESS.

  • If step eval2-visualize finishes with status failed, then declare the submission FAILED.

  • If step eval2-visualize finishes with status error, then declare the submission ERROR.
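
The same flow can be summarized as a transition table. The sketch below is only a compact restatement of the rules above; the YAML schema is hypothetical and is not the challenge's actual definition file.

# Illustrative transition table (schema hypothetical; rules as listed above).
transitions:
    - [START, success, eval0]
    - [eval0, success, eval0-visualize]
    - [eval0, success, eval0-videos]
    - [eval0, success, eval1]
    - [eval0, failed, FAILED]
    - [eval0, error, ERROR]
    - [eval0-visualize, failed, FAILED]
    - [eval0-visualize, error, ERROR]
    - [eval1, success, eval1-visualize]
    - [eval1, success, eval1-videos]
    - [eval1, success, eval2]
    - [eval1, failed, FAILED]
    - [eval1, error, ERROR]
    - [eval1-visualize, failed, FAILED]
    - [eval1-visualize, error, ERROR]
    - [eval2, success, eval2-visualize]
    - [eval2, success, eval2-videos]
    - [eval2, failed, FAILED]
    - [eval2, error, ERROR]
    - [eval2-visualize, success, SUCCESS]
    - [eval2-visualize, failed, FAILED]
    - [eval2-visualize, error, ERROR]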

Evaluation step eval0

Timeout: 18000.0 s

Evaluation in the robotarium.

This is the Docker Compose configuration skeleton:

version: '3'
services:
    evaluator:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval0-evaluator:2019_11_20_11_27_25@sha256:79a231d8a49b004aa662d8171269526f91ac7b139bbe98e51fbb03d285375941
        environment: {}
        ports:
        - 8005:8005

The text SUBMISSION_CONTAINER will be replaced with the user container.
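
For illustration, a filled-in configuration might look like the sketch below. Only the evaluator entry is taken from the skeleton above; the solution service name and the submission image are hypothetical stand-ins for the SUBMISSION_CONTAINER placeholder.

version: '3'
services:
    evaluator:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval0-evaluator:2019_11_20_11_27_25@sha256:79a231d8a49b004aa662d8171269526f91ac7b139bbe98e51fbb03d285375941
        environment: {}
        ports:
        - 8005:8005
    solution:
        # Hypothetical service for the submission; the image below stands in
        # for the SUBMISSION_CONTAINER placeholder.
        image: docker.io/yourname/your-aido-submission:latest
        environment: {}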

Resources required for evaluating this step

# Duckiebots: 2
AIDO 2 Map LFV public: 1

Evaluation step eval1

Timeout: 18000.0 s

Evaluation in the robotarium.

This is the Docker Compose configuration skeleton:

version: '3'
services:
    evaluator:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval1-evaluator:2019_11_20_11_27_46@sha256:79a231d8a49b004aa662d8171269526f91ac7b139bbe98e51fbb03d285375941
        environment: {}
        ports:
        - 8005:8005

The text SUBMISSION_CONTAINER will be replaced with the user container.

Resources required for evaluating this step

# Duckiebots: 2
AIDO 2 Map LFV public: 1

Evaluation step eval0-videos-autobots

Timeout: 10800.0 s

This is the Docker Compose configuration skeleton:

version: '3'
services:
    evaluator:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval0-videos-autobots-evaluator:2019_11_05_13_58_14@sha256:8c0851ce38352634dec44994dd6845c92003cd328730159627adabad9744c2aa
        environment:
            WORKER_I: '0'
            WORKER_N: '1'
            INPUT_DIR: /challenges/previous-steps/eval0/challenge-evaluation-output/raw_logs/bags
            OUTPUT_DIR: /challenges/challenge-evaluation-output
            DEBUG_OVERLAY: '1'
            BAG_NAME_FILTER: autobot
            OUTPUT_FRAMERATE: '12'

The text SUBMISSION_CONTAINER will be replaced with the user container.

Resources required for evaluating this step

Cloud simulations: 1
IPFS mountpoint /ipfs available: 1
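
The WORKER_I / WORKER_N variables suggest that the video rendering can be sharded across several workers. The sketch below shows how two parallel workers might be configured, assuming WORKER_I is the worker's index and WORKER_N the total number of workers; the two-worker setup is hypothetical and not part of the step above.

version: '3'
services:
    evaluator-0:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval0-videos-autobots-evaluator:2019_11_05_13_58_14@sha256:8c0851ce38352634dec44994dd6845c92003cd328730159627adabad9744c2aa
        environment:
            WORKER_I: '0'         # this worker's index (hypothetical two-way split)
            WORKER_N: '2'         # total number of workers
            INPUT_DIR: /challenges/previous-steps/eval0/challenge-evaluation-output/raw_logs/bags
            OUTPUT_DIR: /challenges/challenge-evaluation-output
            DEBUG_OVERLAY: '1'
            BAG_NAME_FILTER: autobot
            OUTPUT_FRAMERATE: '12'
    evaluator-1:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval0-videos-autobots-evaluator:2019_11_05_13_58_14@sha256:8c0851ce38352634dec44994dd6845c92003cd328730159627adabad9744c2aa
        environment:
            WORKER_I: '1'         # second worker handles the remaining bags
            WORKER_N: '2'
            INPUT_DIR: /challenges/previous-steps/eval0/challenge-evaluation-output/raw_logs/bags
            OUTPUT_DIR: /challenges/challenge-evaluation-output
            DEBUG_OVERLAY: '1'
            BAG_NAME_FILTER: autobot
            OUTPUT_FRAMERATE: '12'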

Evaluation step eval0-videos-watchtowers

Timeout: 10800.0 s

This is the Docker Compose configuration skeleton:

version: '3'
services:
    evaluator:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval0-videos-watchtowers-evaluator:2019_11_05_13_58_39@sha256:8c0851ce38352634dec44994dd6845c92003cd328730159627adabad9744c2aa
        environment:
            WORKER_I: '0'
            WORKER_N: '1'
            INPUT_DIR: /challenges/previous-steps/eval0/challenge-evaluation-output/raw_logs/bags
            OUTPUT_DIR: /challenges/challenge-evaluation-output
            DEBUG_OVERLAY: '0'
            BAG_NAME_FILTER: watchtower
            OUTPUT_FRAMERATE: '12'

The text SUBMISSION_CONTAINER will be replaced with the user container.

Resources required for evaluating this step

Cloud simulations: 1
IPFS mountpoint /ipfs available: 1

Evaluation step eval1-videos-autobots

Timeout: 10800.0 s

This is the Docker Compose configuration skeleton:

version: '3'
services:
    evaluator:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval1-videos-autobots-evaluator:2019_11_05_13_59_05@sha256:8c0851ce38352634dec44994dd6845c92003cd328730159627adabad9744c2aa
        environment:
            WORKER_I: '0'
            WORKER_N: '1'
            INPUT_DIR: /challenges/previous-steps/eval0/challenge-evaluation-output/raw_logs/bags
            OUTPUT_DIR: /challenges/challenge-evaluation-output
            DEBUG_OVERLAY: '1'
            BAG_NAME_FILTER: autobot
            OUTPUT_FRAMERATE: '12'

The text SUBMISSION_CONTAINER will be replaced with the user container.

Resources required for evaluating this step

Cloud simulations: 1
IPFS mountpoint /ipfs available: 1

Evaluation step eval1-videos-watchtowers

Timeout: 10800.0 s

This is the Docker Compose configuration skeleton:

version: '3'
services:
    evaluator:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval1-videos-watchtowers-evaluator:2019_11_05_13_59_30@sha256:8c0851ce38352634dec44994dd6845c92003cd328730159627adabad9744c2aa
        environment:
            WORKER_I: '0'
            WORKER_N: '1'
            INPUT_DIR: /challenges/previous-steps/eval0/challenge-evaluation-output/raw_logs/bags
            OUTPUT_DIR: /challenges/challenge-evaluation-output
            DEBUG_OVERLAY: '0'
            BAG_NAME_FILTER: watchtower
            OUTPUT_FRAMERATE: '12'

The text SUBMISSION_CONTAINER will be replaced with the user container.

Resources required for evaluating this step

Cloud simulations: 1
IPFS mountpoint /ipfs available: 1

Evaluation step eval2

Timeout: 18000.0 s

Evaluation in the robotarium.

This is the Docker Compose configuration skeleton:

version: '3'
services:
    evaluator:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval2-evaluator:2019_11_20_11_28_15@sha256:79a231d8a49b004aa662d8171269526f91ac7b139bbe98e51fbb03d285375941
        environment: {}
        ports:
        - 8005:8005

The text SUBMISSION_CONTAINER will be replaced with the user container.

Resources required for evaluating this step

# Duckiebots: 2
AIDO 2 Map LFV public: 1

Evaluation step eval0-visualize

Timeout: 1080.0 s

This is the Docker Compose configuration skeleton:

version: '3'
services:
    evaluator:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval0-visualize-evaluator:2019_11_25_12_02_05@sha256:e49cd60e2f408d05fdb908f545de9fb82bdce6544a45a68ab253427f7da3d7aa
        environment:
            STEP_NAME: eval0

The text SUBMISSION_CONTAINER will be replaced with the user container.

Resources required for evaluating this step

Cloud simulations: 1

Evaluation step eval1-visualize

Timeout: 1080.0 s

This is the Docker Compose configuration skeleton:

version: '3'
services:
    evaluator:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval1-visualize-evaluator:2019_11_25_12_02_29@sha256:e49cd60e2f408d05fdb908f545de9fb82bdce6544a45a68ab253427f7da3d7aa
        environment:
            STEP_NAME: eval1

The text SUBMISSION_CONTAINER will be replaced with the user container.

Resources required for evaluating this step

Cloud simulations: 1

Evaluation step eval2-visualize

Timeout: 1080.0 s

This is the Docker Compose configuration skeleton:

version: '3'
services:
    evaluator:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval2-visualize-evaluator:2019_11_25_12_02_53@sha256:e49cd60e2f408d05fdb908f545de9fb82bdce6544a45a68ab253427f7da3d7aa
        environment:
            STEP_NAME: eval2

The text SUBMISSION_CONTAINER will be replaced with the user container.

Resources required for evaluating this step

Cloud simulations: 1

Evaluation step eval0-videos

Timeout: 10800.0 s

This is the Docker Compose configuration skeleton:

version: '3'
services:
    evaluator:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval0-videos-evaluator:2019_12_02_14_17_54@sha256:c1050320725f39d1b03562b4fa2f32952e7ad68cc56a27c915d3e45121caa527
        environment:
            WORKER_I: '0'
            WORKER_N: '1'
            INPUT_DIR: /challenges/previous-steps/eval0/logs_raw
            OUTPUT_DIR: /challenges/challenge-evaluation-output
            DEBUG_OVERLAY: '1'
            BAG_NAME_FILTER: autobot,watchtower
            OUTPUT_FRAMERATE: '7'

The text SUBMISSION_CONTAINER will be replaced with the user container.

Resources required for evaluating this step

Cloud simulations: 1
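
To see how INPUT_DIR and OUTPUT_DIR are consumed, a configuration along the lines of the sketch below could be used to run the same image against a local directory of bag files. The bind mounts and host paths are hypothetical, added only for illustration; everything else matches the skeleton above.

version: '3'
services:
    evaluator:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval0-videos-evaluator:2019_12_02_14_17_54@sha256:c1050320725f39d1b03562b4fa2f32952e7ad68cc56a27c915d3e45121caa527
        environment:
            WORKER_I: '0'
            WORKER_N: '1'
            INPUT_DIR: /challenges/previous-steps/eval0/logs_raw
            OUTPUT_DIR: /challenges/challenge-evaluation-output
            DEBUG_OVERLAY: '1'
            BAG_NAME_FILTER: autobot,watchtower
            OUTPUT_FRAMERATE: '7'
        volumes:
        # Hypothetical bind mounts mapping local directories onto the paths
        # that the evaluator reads from and writes to.
        - ./local-bags:/challenges/previous-steps/eval0/logs_raw
        - ./local-videos:/challenges/challenge-evaluation-output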

Evaluation step eval1-videos

Timeout: 10800.0 s

This is the Docker Compose configuration skeleton:

version: '3'
services:
    evaluator:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval1-videos-evaluator:2019_12_02_14_17_59@sha256:c1050320725f39d1b03562b4fa2f32952e7ad68cc56a27c915d3e45121caa527
        environment:
            WORKER_I: '0'
            WORKER_N: '1'
            INPUT_DIR: /challenges/previous-steps/eval1/logs_raw
            OUTPUT_DIR: /challenges/challenge-evaluation-output
            DEBUG_OVERLAY: '1'
            BAG_NAME_FILTER: autobot,watchtower
            OUTPUT_FRAMERATE: '7'

The text SUBMISSION_CONTAINER will be replaced with the user container.

Resources required for evaluating this step

Cloud simulations: 1

Evaluation step eval2-videos

Timeout: 10800.0 s

This is the Docker Compose configuration skeleton:

version: '3'
services:
    evaluator:
        image: docker.io/andreacensi/aido3-lfv-real-validation-eval2-videos-evaluator:2019_12_02_14_18_04@sha256:c1050320725f39d1b03562b4fa2f32952e7ad68cc56a27c915d3e45121caa527
        environment:
            WORKER_I: '0'
            WORKER_N: '1'
            INPUT_DIR: /challenges/previous-steps/eval2/logs_raw
            OUTPUT_DIR: /challenges/challenge-evaluation-output
            DEBUG_OVERLAY: '1'
            BAG_NAME_FILTER: autobot,watchtower
            OUTPUT_FRAMERATE: '7'

The text SUBMISSION_CONTAINER will be replaced with the user container.

Resources required for evaluating this step

Cloud simulations: 1