
Submission 12236

Submission: 12236
Competing: user retired
Challenge: aido5-LFV_multi-sim-validation
User: Bea Baselines 🐤
Date submitted:
Last status update: -
Complete:
Details: status not computed yet
Sisters:
Result: ?
Jobs:
Next:
User label: baseline-behavior-cloning
Admin priority: 50
Blessing: n/a
User priority: 50

Status not computed yet.

Evaluation jobs for this submission

Job ID | step | status | up to date | date started | date completed | duration | message
52727 | LFVmultibodyv-sim | host-error | no | - | - | 0:01:11 | InvalidEnvironment: [...]
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

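The failure above occurs while the experiment manager is seeding the episode: the agent node "ego2" aborts inside its check_tensorflow_gpu() helper, because tf.test.gpu_device_name() triggers implicit CUDA initialization and TensorFlow's default allocator then tries to reserve essentially the whole GPU, which the other containers on the evaluator host have already claimed. Below is a minimal, hedged sketch of a GPU probe that enables memory growth before touching the device; it assumes TensorFlow 2.x as used in the evaluation image, and the helper name check_tensorflow_gpu_safely is illustrative rather than part of the baseline.

    # Sketch only: probe the GPU without letting TensorFlow reserve all of its memory.
    # set_memory_growth must run before any CUDA initialization in this process.
    import tensorflow as tf

    def check_tensorflow_gpu_safely() -> str:
        for gpu in tf.config.list_physical_devices("GPU"):
            # Allocate memory on demand instead of grabbing the whole device,
            # which matters when several agent containers share one GPU.
            tf.config.experimental.set_memory_growth(gpu, True)
        logical = tf.config.list_logical_devices("GPU")
        return logical[0].name if logical else ""

Whether this avoids the host-error depends on how much memory the other processes on the shared GPU have already reserved; it only stops this process from over-allocating during its own device probe.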
52726 | LFVmultibodyv-sim | host-error | no | - | - | 0:01:07 | InvalidEnvironment: [...]
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

49415 | LFVmultibodyv-sim | host-error | no | - | - | 0:04:07 | InvalidEnvironment: [...]
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego1" aborted with the following error:

error in ego1 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 44, in init
              ||     self.model = FrankNet.build(200, 150)
              ||   File "/submission/frankModel.py", line 76, in build
              ||     linearVelocity = FrankNet.build_linear_branch(inputs)
              ||   File "/submission/frankModel.py", line 31, in build_linear_branch
              ||     x = Dense(1164, kernel_initializer='normal', activation='relu')(x)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 897, in __call__
              ||     self._maybe_build(inputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 2416, in _maybe_build
              ||     self.build(input_shapes)  # pylint:disable=not-callable
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/core.py", line 1158, in build
              ||     self.kernel = self.add_weight(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 560, in add_weight
              ||     variable = self._add_variable_with_custom_getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/tracking/base.py", line 738, in _add_variable_with_custom_getter
              ||     new_variable = getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 129, in make_variable
              ||     return tf_variables.VariableV1(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 259, in __call__
              ||     return cls._variable_v1_call(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 205, in _variable_v1_call
              ||     return previous_getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 198, in <lambda>
              ||     previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variable_scope.py", line 2584, in default_variable_creator
              ||     return resource_variable_ops.ResourceVariable(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 263, in __call__
              ||     return super(VariableMetaclass, cls).__call__(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1423, in __init__
              ||     self._init_from_args(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1567, in _init_from_args
              ||     initial_value() if init_from_fn else initial_value,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 121, in <lambda>
              ||     init_val = lambda: initializer(shape, dtype=dtype)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/init_ops_v2.py", line 381, in __call__
              ||     return self._random_generator.random_normal(shape, self.mean, self.stddev,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/init_ops_v2.py", line 1058, in random_normal
              ||     return op(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/random_ops.py", line 91, in random_normal
              ||     rnd = gen_random_ops.random_standard_normal(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_random_ops.py", line 641, in random_standard_normal
              ||     _ops.raise_from_not_ok_status(e, name)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 6653, in raise_from_not_ok_status
              ||     six.raise_from(core._status_to_exception(e.code, message), None)
              ||   File "<string>", line 3, in raise_from
              || tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[13824,1164] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:RandomStandardNormal]
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 329, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 44, in init
              || |     self.model = FrankNet.build(200, 150)
              || |   File "/submission/frankModel.py", line 76, in build
              || |     linearVelocity = FrankNet.build_linear_branch(inputs)
              || |   File "/submission/frankModel.py", line 31, in build_linear_branch
              || |     x = Dense(1164, kernel_initializer='normal', activation='relu')(x)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 897, in __call__
              || |     self._maybe_build(inputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 2416, in _maybe_build
              || |     self.build(input_shapes)  # pylint:disable=not-callable
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/core.py", line 1158, in build
              || |     self.kernel = self.add_weight(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 560, in add_weight
              || |     variable = self._add_variable_with_custom_getter(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/tracking/base.py", line 738, in _add_variable_with_custom_getter
              || |     new_variable = getter(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 129, in make_variable
              || |     return tf_variables.VariableV1(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 259, in __call__
              || |     return cls._variable_v1_call(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 205, in _variable_v1_call
              || |     return previous_getter(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 198, in <lambda>
              || |     previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variable_scope.py", line 2584, in default_variable_creator
              || |     return resource_variable_ops.ResourceVariable(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 263, in __call__
              || |     return super(VariableMetaclass, cls).__call__(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1423, in __init__
              || |     self._init_from_args(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1567, in _init_from_args
              || |     initial_value() if init_from_fn else initial_value,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 121, in <lambda>
              || |     init_val = lambda: initializer(shape, dtype=dtype)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/init_ops_v2.py", line 381, in __call__
              || |     return self._random_generator.random_normal(shape, self.mean, self.stddev,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/init_ops_v2.py", line 1058, in random_normal
              || |     return op(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/random_ops.py", line 91, in random_normal
              || |     rnd = gen_random_ops.random_standard_normal(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_random_ops.py", line 641, in random_standard_normal
              || |     _ops.raise_from_not_ok_status(e, name)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 6653, in raise_from_not_ok_status
              || |     six.raise_from(core._status_to_exception(e.code, message), None)
              || |   File "<string>", line 3, in raise_from
              || | tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[13824,1164] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:RandomStandardNormal]
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego1" aborted with the following error:

error in ego1 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 44, in init
              ||     self.model = FrankNet.build(200, 150)
              ||   File "/submission/frankModel.py", line 76, in build
              ||     linearVelocity = FrankNet.build_linear_branch(inputs)
              ||   File "/submission/frankModel.py", line 31, in build_linear_branch
              ||     x = Dense(1164, kernel_initializer='normal', activation='relu')(x)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 897, in __call__
              ||     self._maybe_build(inputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 2416, in _maybe_build
              ||     self.build(input_shapes)  # pylint:disable=not-callable
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/core.py", line 1158, in build
              ||     self.kernel = self.add_weight(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 560, in add_weight
              ||     variable = self._add_variable_with_custom_getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/tracking/base.py", line 738, in _add_variable_with_custom_getter
              ||     new_variable = getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 129, in make_variable
              ||     return tf_variables.VariableV1(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 259, in __call__
              ||     return cls._variable_v1_call(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 205, in _variable_v1_call
              ||     return previous_getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 198, in <lambda>
              ||     previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variable_scope.py", line 2584, in default_variable_creator
              ||     return resource_variable_ops.ResourceVariable(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 263, in __call__
              ||     return super(VariableMetaclass, cls).__call__(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1423, in __init__
              ||     self._init_from_args(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1567, in _init_from_args
              ||     initial_value() if init_from_fn else initial_value,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 121, in <lambda>
              ||     init_val = lambda: initializer(shape, dtype=dtype)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/init_ops_v2.py", line 381, in __call__
              ||     return self._random_generator.random_normal(shape, self.mean, self.stddev,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/init_ops_v2.py", line 1058, in random_normal
              ||     return op(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/random_ops.py", line 91, in random_normal
              ||     rnd = gen_random_ops.random_standard_normal(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_random_ops.py", line 641, in random_standard_normal
              ||     _ops.raise_from_not_ok_status(e, name)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 6653, in raise_from_not_ok_status
              ||     six.raise_from(core._status_to_exception(e.code, message), None)
              ||   File "<string>", line 3, in raise_from
              || tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[13824,1164] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:RandomStandardNormal]
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 329, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 44, in init
              || |     self.model = FrankNet.build(200, 150)
              || |   File "/submission/frankModel.py", line 76, in build
              || |     linearVelocity = FrankNet.build_linear_branch(inputs)
              || |   File "/submission/frankModel.py", line 31, in build_linear_branch
              || |     x = Dense(1164, kernel_initializer='normal', activation='relu')(x)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 897, in __call__
              || |     self._maybe_build(inputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 2416, in _maybe_build
              || |     self.build(input_shapes)  # pylint:disable=not-callable
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/core.py", line 1158, in build
              || |     self.kernel = self.add_weight(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 560, in add_weight
              || |     variable = self._add_variable_with_custom_getter(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/tracking/base.py", line 738, in _add_variable_with_custom_getter
              || |     new_variable = getter(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 129, in make_variable
              || |     return tf_variables.VariableV1(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 259, in __call__
              || |     return cls._variable_v1_call(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 205, in _variable_v1_call
              || |     return previous_getter(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 198, in <lambda>
              || |     previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variable_scope.py", line 2584, in default_variable_creator
              || |     return resource_variable_ops.ResourceVariable(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 263, in __call__
              || |     return super(VariableMetaclass, cls).__call__(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1423, in __init__
              || |     self._init_from_args(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1567, in _init_from_args
              || |     initial_value() if init_from_fn else initial_value,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 121, in <lambda>
              || |     init_val = lambda: initializer(shape, dtype=dtype)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/init_ops_v2.py", line 381, in __call__
              || |     return self._random_generator.random_normal(shape, self.mean, self.stddev,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/init_ops_v2.py", line 1058, in random_normal
              || |     return op(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/random_ops.py", line 91, in random_normal
              || |     rnd = gen_random_ops.random_standard_normal(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_random_ops.py", line 641, in random_standard_normal
              || |     _ops.raise_from_not_ok_status(e, name)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 6653, in raise_from_not_ok_status
              || |     six.raise_from(core._status_to_exception(e.code, message), None)
              || |   File "<string>", line 3, in raise_from
              || | tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[13824,1164] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:RandomStandardNormal]
              || |
              ||

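Job 49415 fails one step later than the others: CUDA initialization succeeds, but building the FrankNet model runs out of memory as soon as the kernel of the first Dense(1164) layer (shape [13824, 1164]) is allocated. On a shared GPU, one hedged option is to cap the process to a fixed slice of device memory with TensorFlow's experimental virtual-device API before the model is built; the sketch below assumes TensorFlow 2.x, and the 1024 MB budget is an arbitrary illustrative value, not a figure taken from the challenge infrastructure.

    # Sketch only: restrict this TensorFlow process to a fixed memory budget on GPU:0.
    # Must run before the GPU is initialized, i.e. before any model is built.
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    if gpus:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)],
        )
    # The model (e.g. FrankNet.build(200, 150)) would then be constructed after this step.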
49328 | LFVmultibodyv-sim | host-error | no | - | - | 0:00:57 | InvalidEnvironment: [...]
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

49324 | LFVmultibodyv-sim | host-error | no | - | - | 0:01:13 | The container "evalu [...]
The container "evaluator" exited with code 1.


Look at the logs for the container to know more about the error.
49253 | LFVmultibodyv-sim | host-error | no | - | - | 0:00:54 | InvalidEnvironment: [...]
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

49250 | LFVmultibodyv-sim | host-error | no | - | - | 0:01:31 | InvalidEnvironment: [...]
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

49244 | LFVmultibodyv-sim | host-error | no | - | - | 0:01:05 | InvalidEnvironment: [...]
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

49234 | LFVmultibodyv-sim | host-error | no | - | - | 0:02:38 | InvalidEnvironment: [...]
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

Job 49227 | step: LFVmultibodyv-sim | status: host-error | up to date: no | duration: 0:01:01
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

Job 49191 | step: LFVmultibodyv-sim | status: host-error | up to date: no | duration: 0:01:33
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

Job 49177 | step: LFVmultibodyv-sim | status: host-error | up to date: no | duration: 0:01:10
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

Job 49168 | step: LFVmultibodyv-sim | status: host-error | up to date: no | duration: 0:02:18
The container "solut [...]
The container "solution-ego1" exited with code 139.


Error code 139 means GPU memory error.
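Exit code 139 is not a CUDA status code: for a container it means 128 + 11, i.e. the solution process was killed by SIGSEGV. The dashboard's note above maps it to a GPU memory error, presumably because on these hosts the segfault is the typical symptom of CUDA initializing on an already exhausted device; the same code recurs for jobs 49133, 49076 and 49062 below. A minimal sketch of decoding such an exit status (the value 139 is taken from the message above):

    # Decode a container exit status of the form 128 + signal number.
    import signal

    exit_code = 139  # "The container 'solution-ego1' exited with code 139."
    if exit_code > 128:
        sig = signal.Signals(exit_code - 128)
        print(f"killed by signal {sig.value} ({sig.name})")  # 11 (SIGSEGV)
    else:
        print(f"exited normally with status {exit_code}")
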
Job 49137 | step: LFVmultibodyv-sim | status: host-error | up to date: no | duration: 0:01:05
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

Job 49133 | step: LFVmultibodyv-sim | status: host-error | up to date: no | duration: 0:02:06
The container "solut [...]
The container "solution-ego1" exited with code 139.


Error code 139 means GPU memory error.
Job 49089 | step: LFVmultibodyv-sim | status: host-error | up to date: no | duration: 0:00:57
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

Job 49086 | step: LFVmultibodyv-sim | status: host-error | up to date: no | duration: 0:01:03
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

Job 49082 | step: LFVmultibodyv-sim | status: host-error | up to date: no | duration: 0:01:04
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

Job 49076 | step: LFVmultibodyv-sim | status: host-error | up to date: no | duration: 0:01:46
The container "solut [...]
The container "solution-ego1" exited with code 139.


Error code 139 means GPU memory error.
Job 49073 | step: LFVmultibodyv-sim | status: host-error | up to date: no | duration: 0:01:11
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

Job 49066 | step: LFVmultibodyv-sim | status: host-error | up to date: no | duration: 0:00:56
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

Job 49062 | step: LFVmultibodyv-sim | status: host-error | up to date: no | duration: 0:02:11
The container "solut [...]
The container "solution-ego1" exited with code 139.


Error code 139 means GPU memory error.
Job 49048 | step: LFVmultibodyv-sim | status: host-error | up to date: no | duration: 0:01:02
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 320, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 37, in init
              ||     self.check_tensorflow_gpu()
              ||   File "solution.py", line 53, in check_tensorflow_gpu
              ||     name = tf.test.gpu_device_name()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py", line 106, in gpu_device_name
              ||     for x in device_lib.list_local_devices():
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/device_lib.py", line 43, in list_local_devices
              ||     _convert(s) for s in _pywrap_device_lib.list_devices(serialized_config)
              || RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
              ||
