OpenCode inside Docker: How to safely run AI in your local terminal?
Introduction
I’ve recently been looking into how to run AI tools safely within our local environments. Tools like OpenCode are great, but they give the model a lot of freedom to interact with our files. I usually follow the principle of limited trust, which led me to try running OpenCode inside a container. This way, it only has access to the specific files and directories it actually needs.
Why isolation matters
When we allow AI models to generate and run code, we are essentially giving them partial control over our terminal. Even if we trust the provider, a simple bug in a generated script could cause a mess in our file system. The solution? Containerization.
By using a container, OpenCode only sees what we choose to share. Once the job is done, the container disappears, leaving no clutter behind.
Building the image: A look at the Dockerfile
I wanted the image to be as lightweight and secure as possible. I chose Alpine Linux as the base. Here is how I structured the environment:
FROM alpine:3.19
ARG UID=1000
ARG GID=1000
# Install OpenCode, move the binary into the system path, then remove curl.
# Note: no "|| true"-style fallbacks here — if the install or the move fails,
# the build should fail loudly rather than produce a broken image.
RUN apk add --no-cache curl ca-certificates bash libstdc++ libgcc \
    && curl -fsSL https://opencode.ai/install | bash \
    && mv /root/.opencode/bin/opencode /usr/local/bin/opencode \
    && apk del curl
RUN addgroup -g $GID coder 2>/dev/null; \
GROUP_NAME=$(getent group $GID | cut -d: -f1); \
adduser -D -s /bin/sh -u $UID -G "$GROUP_NAME" coder \
&& mkdir -p /home/coder/.config/opencode \
&& chown -R coder:"$GROUP_NAME" /home/coder
USER coder
WORKDIR /workspace
CMD ["opencode"]
I started with Alpine because it keeps the image in the featherweight category: the base weighs only a few megabytes. Next, I installed the tool using curl. To keep things clean, I removed curl immediately after moving the binary to the system folder. This minimizes the number of unnecessary tools in the image.
By using the UID and GID arguments, we map your host identity directly into the container. I used a small trick here: if a group with the requested GID already exists in Alpine (which often happens on macOS, where the primary group typically has GID 20), the script won’t throw an error. Instead, it dynamically looks up the existing group’s name and assigns the coder user to it.
Thanks to this, OpenCode runs with your permissions. Any files created by the AI belong to you, not to root. This means you can freely edit them without needing sudo.
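The group-lookup fallback can be sketched outside Docker as well. Here is a minimal illustration, using GID 0 (which exists on virtually every Linux system) as a stand-in for your real group ID:

```shell
#!/bin/sh
# Resolve a numeric GID to its group name — the same trick the Dockerfile
# uses when the requested GID is already taken by an existing group.
# GID 0 is used purely for illustration.
WANT_GID=0
GROUP_NAME=$(getent group "$WANT_GID" | cut -d: -f1)
echo "GID $WANT_GID maps to group: $GROUP_NAME"
```

On a typical Linux system this prints `GID 0 maps to group: root`. In the Dockerfile, the same lookup lets `adduser` attach the coder user to whichever group already owns the requested GID instead of aborting the build.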
When building the image, we need to pass the local parameters using the --build-arg flags:
docker build --build-arg UID=$(id -u) --build-arg GID=$(id -g) --no-cache -t safe-opencode .
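If you want to confirm that the identity mapping worked, a quick sanity check is to compare the UID reported inside the container with your own. This is just a sketch — it assumes the image was tagged safe-opencode as above, and it skips gracefully when docker is unavailable:

```shell
#!/bin/sh
# Compare the host UID with the UID the container process runs as.
if ! command -v docker >/dev/null 2>&1; then
    echo "docker not available, skipping check"
    exit 0
fi
HOST_UID=$(id -u)
CONTAINER_UID=$(docker run --rm safe-opencode id -u)
if [ "$HOST_UID" = "$CONTAINER_UID" ]; then
    echo "UID mapping OK ($HOST_UID)"
else
    echo "UID mismatch: host=$HOST_UID container=$CONTAINER_UID"
fi
```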
Configuration for maximum security
Docker isolation is just the first layer. The second layer is the internal configuration of OpenCode itself. I noticed that the default settings can be a bit too permissive. I recommend forcing a confirmation for every critical action.
In the ~/.config/opencode/opencode.json file, I suggest setting these flags:
{
  "$schema": "https://opencode.ai/config.json",
  "permission": {
    "edit": "ask",
    "bash": "ask"
  }
}
Now, if the AI wants to edit a file or run a bash script, it will ask for your permission first. This simple barrier gives you a moment to reflect on what is actually happening.
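To make sure the file actually parses and both flags are in place, you can run a small check. The snippet below is a self-contained demo that writes a sample config to a temp directory; to check your real setup, point CONFIG at ~/.config/opencode/opencode.json instead. It assumes python3 is available for JSON parsing:

```shell
#!/bin/sh
# Validate that both critical actions require confirmation.
CONFIG=$(mktemp -d)/opencode.json
cat > "$CONFIG" <<'EOF'
{
  "$schema": "https://opencode.ai/config.json",
  "permission": { "edit": "ask", "bash": "ask" }
}
EOF

python3 - "$CONFIG" <<'PY'
import json, sys
perm = json.load(open(sys.argv[1])).get("permission", {})
for key in ("edit", "bash"):
    assert perm.get(key) == "ask", f"'{key}' is not set to 'ask'"
print("permission checks OK")
PY
```

A malformed file or a missing flag fails the check immediately, which is cheaper to discover now than mid-session when the AI silently gets more freedom than you intended.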
An alias for convenience
To make this feel as natural as a standard command, you can add an alias to your ~/.bashrc or ~/.zshrc file:
alias safe-opencode='docker run --rm -it \
-v "$HOME/.config/opencode/opencode.json:/home/coder/.config/opencode/opencode.json:ro" \
-v "$(pwd):/workspace:rw" \
safe-opencode'
After reloading your terminal, you can just type safe-opencode in any project folder. The tool will start in isolation, accessing only the files in that specific directory.
Notice that I mounted the config file as read-only (:ro). This means OpenCode can use our settings but cannot change them without our knowledge.
Summary
Setting up this isolation layer only took a few minutes, but it provides a lot of peace of mind when experimenting with new models. We end up with a lightweight, portable, and—most importantly—secure work environment.