Creating a portable nvim container
Portable Neovim and Docker shenanigans
How would you like to carry your Neovim setup between platforms and machines with a single token and command, while knowing exactly how it will behave?
Most of my Docker containers have been long-running services managed with Compose rather than temporary execution environments.
Trying out Helix in an earlier article inspired me to containerize my Neovim setup. During this process, I learned a lot and had some aha moments.
Here I'll go through how to use Docker for convenience and security. My examples use fish, because that is my preferred shell, but they are easily transferable to bash.
Creating a portable nvim container
The goal was to create a standalone Docker container that I can run offline if necessary.
Instead of installing Neovim and letting it set up its dependencies,
I just pull an image and have everything working and ready. Controlling the versioning means that I'll always
have the exact same environment. I can run it confidently with the --network none option because
all of the dependencies are built into the container, which eliminates most of the security concerns too.
This meant that I had to bake in some executables and include my plugins as state. My current build uses an AppImage for the Neovim executable, but I'm planning to either build or download the correct binary for each target platform.
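For reference, the extracted-AppImage layout that the Dockerfile below expects (bin/AppRun and bin/usr/bin/nvim) can be produced roughly like this on the build machine; the release asset name varies between Neovim versions, so treat the URL as an assumption:

```shell
# Download a Neovim AppImage (asset name depends on the release; check
# the Neovim releases page) and extract it so the container needs no FUSE
curl -LO https://github.com/neovim/neovim/releases/latest/download/nvim.appimage
chmod +x nvim.appimage
./nvim.appimage --appimage-extract   # unpacks into squashfs-root/
mv squashfs-root bin                 # gives bin/AppRun and bin/usr/bin/nvim
```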
Here’s the base of the Dockerfile:
FROM debian:stable-slim
# Install tools that I need with Neovim
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive \
apt-get install -y --no-install-recommends \
git \
ripgrep \
jq && \
apt-get clean && rm -rf /var/lib/apt/lists/*
WORKDIR /opt/portable-nvim
COPY . .
# Make my nvim and formatter executable
RUN chmod +x bin/AppRun bin/usr/bin/nvim && \
chmod +x bin/dprint && \
find bin -type f -executable -exec strip --strip-all {} + || true
# Set my env variable, which handles the custom config point for Neovim
ENV PORTABLE_NVIM_HOME=/opt/portable-nvim
ARG USER_UID=1000
ARG USER_GID=1000
RUN groupadd --gid ${USER_GID} appgroup && \
useradd --uid ${USER_UID} --gid ${USER_GID} --create-home --shell /bin/bash appuser
RUN mkdir -p /.cache /project /.local && \
chown -R appuser:appgroup $PORTABLE_NVIM_HOME /.cache /project /.local
USER appuser
ENTRYPOINT ["./nvim"]
The entrypoint here is a script (confusingly named nvim).
It is also the script that I use to run Neovim outside of Docker on my build machine.
Overriding the system XDG directories can cause issues; mine mostly manifest as missing configs when using a terminal inside Neovim.
#!/usr/bin/env bash
if [[ -z "${PORTABLE_NVIM_HOME:-}" ]]; then
echo 'PORTABLE_NVIM_HOME environment variable is not set'
exit 1
fi
DIR="${PORTABLE_NVIM_HOME}"
# Ensure binaries are executable
chmod +x "$DIR/bin/AppRun" "$DIR/bin/usr/bin/nvim"
XDG_CONFIG_HOME="$DIR/config" XDG_DATA_HOME="$DIR/site" \
exec "$DIR/bin/usr/bin/nvim" "$@"
Building and distributing the image
I build the image manually after making changes and push it to the GitHub Container Registry. This makes it easy to pull the image with just a personal access token that has read permissions.
To make it extra useful, the image can be built for multiple architectures using buildx; the matching architecture is then resolved automatically when the image is pulled.
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t porta-nvim:latest \
.
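The push-and-pull flow can be sketched like this; the ghcr.io path, the user name, and the token variable are placeholders, not from my actual setup:

```shell
# Log in to GHCR with a personal access token (read:packages is enough to pull,
# write:packages is needed to push)
echo "$GHCR_PAT" | docker login ghcr.io -u my-user --password-stdin

# Multi-arch images have to be pushed as part of the build; buildx cannot
# load a multi-platform result into the local daemon
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t ghcr.io/my-user/porta-nvim:latest \
  --push .

# On any machine, docker resolves the correct architecture from the manifest list
docker pull ghcr.io/my-user/porta-nvim:latest
```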
Configuring the runtime
I included a few themes in the image and implemented a Docker clipboard integration for WSL2 for when I need to use Windows.
I decided to control these with environment variables in my Neovim config. Here's how I handle the clipboard and theme variations to suit the situation.
docker run --rm --network none -it \
-e COLOR_MODE=LIGHT \
-e CLIP=DOCKER \
porta-nvim:latest
local mode = os.getenv 'COLOR_MODE'
if mode == 'LIGHT' then
return {
{
'rose-pine/neovim',
name = 'rose-pine',
priority = 1000,
config = function()
vim.cmd 'colorscheme rose-pine-dawn'
end,
},
}
end
-- When running through docker, use a shared file to handle possible clipboard integration
if os.getenv 'CLIP' == 'DOCKER' then
vim.g.clipboard = {
name = 'docker-file-sync',
copy = {
['+'] = { 'tee', '/project/.docker_clipboard' },
['*'] = { 'tee', '/project/.docker_clipboard' },
},
paste = {
['+'] = { 'cat', '/project/.docker_clipboard' },
['*'] = { 'cat', '/project/.docker_clipboard' },
},
cache_enabled = 1,
}
end
WSL2 Docker clipboard integration
I need to be able to copy things from my Neovim even when it is running on WSL2. The easy way to do this is to disable mouse integration and use drag selection to copy things.
I wanted more, but Docker doesn't really support copying things to the Windows clipboard through WSL2.
I came up with a solution: a general clipboard file, ~/.docker_clipboard, which works
as my transport layer between WSL2 and Windows.
This solution's limitation is that WSL2 cannot create hard links to the Windows filesystem, so everything needs to stay on the WSL2 instance.
When I copy to the + register in Neovim, it writes the result to that file.
Through the hard link I can then have a systemd service watching the file for changes,
which pipes the contents to the Windows clipboard.
I purposefully left it as a copy-only integration to avoid writing secrets to a plain-text file,
and it is easy enough to Ctrl+Shift+V things from the Windows clipboard.
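The watcher can be sketched as a user-level systemd unit; the unit name, the inotify-tools dependency, and clip.exe being reachable on the PATH are my assumptions here:

```ini
# ~/.config/systemd/user/docker-clipboard.service (hypothetical name)
[Unit]
Description=Pipe ~/.docker_clipboard changes to the Windows clipboard

[Service]
# Requires inotify-tools; %h is the systemd specifier for the user's home.
# clip.exe is the Windows clipboard tool, callable from inside WSL2.
ExecStart=/bin/bash -c 'while inotifywait -e close_write "%h/.docker_clipboard"; do clip.exe < "%h/.docker_clipboard"; done'
Restart=always

[Install]
WantedBy=default.target
```

Enabled with systemctl --user enable --now docker-clipboard.service.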
# This is an internal function that my nvim aliases call to set up the clipboard integration
function _nvim_setup
set -l master_clip ~/.docker_clipboard
set -l project_clip (pwd)/.docker_clipboard
# Ensure the master clip exists and create the hard link
touch "$master_clip"
ln -f "$master_clip" "$project_clip"
end
The full function is shown below. To keep Git and me happy, I added .docker_clipboard to my global ~/.gitignore.
Persistent history
After a while I was getting frustrated that I couldn't undo changes from a previous instance and didn't have access to my command history; it turns out that is an easy problem to fix.
Neovim uses the shada and undo directories under its state directory to store this state. By creating these directories and volume-sharing them, we can have persistent state for each separate project (note that persistent undo also requires the 'undofile' option to be enabled). Alternatively, you could just share the actual shada and undo directories to have consistent state across projects, but I like the idea of keeping these separate.
# The full internal setup function: clipboard integration plus per-project state directories
function _nvim_setup
set -l master_clip ~/.docker_clipboard
set -l project_clip (pwd)/.docker_clipboard
# Ensure the master clip exists and create the hard link
touch "$master_clip"
ln -f "$master_clip" "$project_clip"
set -l nvim_state $argv[1]
mkdir -p "$nvim_state/shada"
mkdir -p "$nvim_state/undo"
end
function nvim
set -l nvim_state .nvim_state
_nvim_setup $nvim_state
docker run --rm --network none -it \
-v (pwd):/project \
-v (pwd)/$nvim_state:/home/appuser/.local/state/nvim \
-w /project \
porta-nvim:latest \
$argv
end
To keep Git and me happy, I added .nvim_state to my global ~/.gitignore.
Extending the base image
Taskwarrior
Repurposing a Docker container is quite easy. If I want to use my editor in a certain context, I can either extend the image by baking in more stuff (as below) or use it as a base to build a completely new image.
I needed a new task manager at work because my markdown list was way too long to be useful anymore. I self-host Vikunja (shoutout), but wanted something bare-bones for work.
After shopping around for a while, I decided on taskwarrior-tui / taskwarrior. Having access to my editor for task management is really nice.
As with Neovim, I downloaded the executables to my build machine and baked them into the image.
# --chmod needs BuildKit, which buildx uses by default
COPY --chmod=0755 bin/task /usr/local/bin/task
COPY --chmod=0755 bin/taskwarrior-tui /usr/local/bin/taskwarrior-tui
function tasks
docker run --rm --network none -it \
-v ./data:/project \
-e EDITOR='/opt/portable-nvim/nvim' \
-e TASKRC='/project/.taskrc' \
-e TASKDATA='/project/.task' \
--entrypoint "" \
porta-nvim:latest \
taskwarrior-tui
end
Custom LSP server
We use a custom LSP server at work, and I decided to extend my Neovim image to support that. The idea sounded complex, but it turns out to be quite easy too.
The server is implemented in Java, so I needed to inject a Java runtime, some Lua files, and a
.vim file for syntax highlighting into the container.
I build the container on my own machine, so I cannot include proprietary executables in it. I could have installed Java in the base image, but we already create a separate, stripped-down Java runtime for the LSP server, so I decided to reuse that.
-- Mount this to the nvim config
local lspconfig = require("lspconfig")
local configs = require("lspconfig.configs")
vim.filetype.add({
extension = {
c = "custom",
},
})
if not configs.custom_ls then
configs.custom_ls = {
default_config = {
cmd = { "/opt/jre/bin/java", "-jar", "/opt/lsp/custom.jar" },
filetypes = { "custom" },
root_dir = function(_)
return vim.fn.getcwd()
end,
},
}
end
lspconfig.custom_ls.setup({})
function custom-lsp
set -l nvim_state .nvim_state
_nvim_setup $nvim_state
docker run --rm --network none -it \
-v /home/user/projects/ls/custom-jre-linux:/opt/jre \
-v /home/user/projects/ls/lspServer-1.0.jar:/opt/lsp/custom.jar \
-v /home/user/projects/ls/lsp-inject.lua:/opt/portable-nvim/config/nvim/after/plugin/custom-lsp.lua \
-v /home/user/projects/ls/my-lang.vim:/opt/portable-nvim/config/nvim/syntax/custom.vim \
-v (pwd):/project \
-v (pwd)/$nvim_state:/home/appuser/.local/state/nvim \
-w /project \
porta-nvim:latest \
$argv
end
Sandboxing and security
I don't really trust AI tools with my computer, so using containers is a natural way to make things safer.
Instead of giving them access to my actual environment and files, I can give them a limited space or a complete sandbox to run in.
Stateless containers
To handle smaller, contained changes that I can easily verify, I've been testing stateless containers running Claude Code.
This lets me “work” on multiple things at the same time without needing any more clones of my projects.
The idea is to pass in my SSH key/deploy key/PAT, clone the repository, implement the changes, and push them. Clearly, an SSH key is not the best for security, but having one that is protected by a passphrase means that Claude cannot push anything autonomously. I feel like this is a good tradeoff between convenience and security.
function ccl
if test (count $argv) -eq 0
echo "Usage: ccl git-url"
echo " git-url Full Git URL (e.g. git@github.com:org/repo.git)"
return 1
end
set repo $argv[1]
if not test -f "$HOME/.ssh/id_ed25519"
echo "ERROR: SSH key not found at $HOME/.ssh/id_ed25519"
return 1
end
docker run -it --rm \
-e GIT_REPO=$repo \
--mount type=bind,src="$HOME/.ssh/id_ed25519",dst=/run/secrets/ssh_key,readonly \
claude-code
end
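Inside the claude-code image, the corresponding entrypoint can be sketched roughly like this; the script itself, the /workspace path, and the image internals are my assumptions, not the actual implementation:

```shell
#!/usr/bin/env bash
# Hypothetical entrypoint for the claude-code image: clone, then hand over to Claude.
set -euo pipefail

# Use the read-only mounted key without copying it anywhere; because the key
# has a passphrase, every push stops and asks for it interactively.
export GIT_SSH_COMMAND='ssh -i /run/secrets/ssh_key -o IdentitiesOnly=yes'

git clone "$GIT_REPO" /workspace/repo
cd /workspace/repo
exec claude
```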
1Password
The 1Password CLI tool op is really cool and can be used to avoid storing
secrets in the current env, or in .env files.
The way I've been running my Claude container is to use op inject
to pass an ephemeral .env file to Docker.
function cc
# Check that I'm logged in before running the container
if not op whoami >/dev/null 2>&1
echo "You are not signed in to 1Password CLI" >&2
eval (op signin)
if not op whoami >/dev/null 2>&1
echo "Sign-in failed" >&2
return 1
end
end
# Inject the evaluated env file to the container without actually creating a file
docker run -it --rm \
-v .:/project \
--env-file (op inject -i $HOME/.config/claude-code/.env | psub) \
--mount type=bind,src="$HOME/.ssh/id_ed25519",dst=/run/secrets/ssh_key,readonly \
claude-code
end
The actual env file looks like this:
# Git
GIT_USER_NAME=""
GIT_USER_EMAIL=""
# Claude through AWS
# op references use UUID instead of a text name
AWS_ACCESS_KEY_ID=op://my-vault/99999999999999999999999999/access-key
AWS_SECRET_ACCESS_KEY=op://my-vault/99999999999999999999999999/credential
Conclusion
All of this to create a reusable, comfy Neovim instance that I can use anywhere
and update with a single docker pull command.
For once, I have something that reduces the amount of configuring and just lets me work.