No, we do not support macOS (regardless of the version); however, you can use the native macOS Docker client to deploy your containers remotely (refer to the [dockerd documentation](https://docs.docker.com/engine/reference/commandline/dockerd/#description)).
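As a rough sketch of the remote-deployment workflow, the macOS Docker client can target a remote Linux host over SSH via `DOCKER_HOST`; the host name, user, and image tag below are placeholders:

```shell
# Point the local Docker CLI at a remote Linux host that has the
# NVIDIA driver and NVIDIA Container Toolkit installed (placeholder host):
export DOCKER_HOST=ssh://user@linux-gpu-host

# Containers now run on the remote host, with GPU access:
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```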
#### Is Microsoft Windows supported?
No, we do not support Microsoft Windows (regardless of the version); however, you can use the native Microsoft Windows Docker client to deploy your containers remotely (refer to the [dockerd documentation](https://docs.docker.com/engine/reference/commandline/dockerd/#description)). We also support running Linux containers on the Windows Subsystem for Linux (WSL 2). Visit the [user guide](https://docs.nvidia.com/cuda/wsl-user-guide/index.html) to get started with WSL 2.
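A minimal smoke test under WSL 2, assuming the NVIDIA driver is installed on the Windows host and the NVIDIA Container Toolkit inside the WSL distribution (the image tag is illustrative):

```shell
# Run inside the WSL 2 distribution; verifies that the container
# can see the host GPU through the toolkit:
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```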
#### Do you support Microsoft native container technologies (e.g. Windows Server, Hyper-V)?
No, we do not yet support native Microsoft container technologies.
#### Do you support Optimus (i.e. NVIDIA dGPU + Intel iGPU)?
Yes, from the CUDA perspective there is no difference, as long as your dGPU is powered on and you follow the official driver instructions.
No, running an X server inside the container is not supported at the moment, and there is no plan to support it in the near future (see also [OpenGL support](#is-opengl-supported)).
#### I have multiple GPU devices, how can I isolate them between my containers?
GPU isolation is achieved through the CLI option `--gpus`. Devices can be referenced by index (following the PCI bus order) or by UUID. See the [user guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/user-guide.html) for more information on these options.
e.g.:
```
# By index (following the PCI bus order):
docker run --rm --gpus '"device=0,1"' nvidia/cuda nvidia-smi

# By UUID (as reported by nvidia-smi -L):
docker run --rm --gpus '"device=GPU-<uuid>"' nvidia/cuda nvidia-smi
```