Selectable Sandbox Backends: DinD and MicroVM
Status: Deferred - requires a dedicated design pass before implementation
Problem
jackin currently has exactly one runtime model:
- build the agent image on the host Docker engine
- create a per-agent Docker network
- start a privileged `docker:dind` sidecar
- start the agent container on that network
- point the agent at the sidecar with `DOCKER_HOST=tcp://...:2375`
That model is coherent, but it leaves a product gap against microVM-based tools such as Docker Sandboxes. Operators who want stronger local isolation cannot choose a hypervisor-backed runtime, and operators who are happy with Docker cannot explicitly select and manage the current mode as a first-class feature.
The requested feature is a single product-level capability:
- the operator should be able to choose how an agent is loaded into a workspace
- the two supported approaches should be `dind` and `microvm`
- the same agent/workspace concepts should continue to work in both modes
Why It Matters
- `dind` is the shortest path and already matches the current architecture
- `microvm` is the path that narrows the gap with Docker Sandboxes
- a user-visible mode switch makes the isolation tradeoff explicit instead of implicit
- the project can keep its current Docker-first ergonomics while adding a stronger boundary where the host supports it
Current State
Today the runtime is tightly coupled to Docker and DinD:
- `src/runtime/image.rs` builds the image; `src/runtime/launch.rs` starts the network, starts `docker:dind`, and launches the agent container
- `src/docker.rs` shells out to the Docker CLI for all lifecycle operations
- `src/instance/naming.rs` derives persisted state paths from Docker-style container names
- `docker/construct/Dockerfile` installs Docker CLI and Compose in the agent environment
- `docker/runtime/entrypoint.sh` launches Claude inside a Docker-oriented runtime contract
Important current assumptions:
- the agent can talk to a Docker-compatible daemon from inside its sandbox
- workspace access is delivered through direct host bind mounts
- agent state persistence is separate from runtime filesystem persistence
- runtime attach/eject/list behavior is discovered from Docker container state
These assumptions are reasonable for dind, but they are not backend-neutral.
Goals
Add a first-class sandbox mode abstraction with these operator-visible outcomes:
- The operator can choose `dind`, `microvm`, or `auto`.
- Existing role repos remain usable without forcing every agent author to redesign their Dockerfile.
- Workspaces, mounts, last-used agent tracking, and persisted Claude/GitHub state continue to work in both modes.
- Unsupported hosts fail clearly or fall back intentionally rather than half-working.
Non-Goals
- Replacing Docker-based image builds in the first phase
- Designing a cloud sandbox product
- Guaranteeing identical low-level runtime behavior across all providers
- Claiming that a hardened container runtime is equivalent to a microVM
User Experience Requirements
The feature should be visible in three places:
CLI
Examples:
```
jackin load agent-smith --sandbox-mode dind
jackin load agent-smith --sandbox-mode microvm
jackin load agent-smith --sandbox-mode auto
```
Config
Suggested global shape:
```
[runtime]
default_mode = "auto"
microvm_provider = "auto"
persist_engine_state = false
```
Suggested workspace override shape:
```
[workspaces.big-monorepo.runtime]
mode = "microvm"
```
Runtime Output
The launch summary should tell the operator which backend is being used, for example:
```
sandbox mode: dind
sandbox mode: microvm (orbstack)
sandbox mode: microvm (smolvm)
```
High-Level Design Options
Section titled “High-Level Design Options”Option 1: DinD Only, Hardened and Explicit
This makes the current architecture first-class without adding a second backend yet.
Pros:
- smallest implementation
- low migration risk
- immediately improves current security posture if TLS/rootless work is added
Cons:
- does not solve the Docker Sandboxes comparison gap
- still shares the host kernel
Option 2: Two User Modes, One Generic MicroVM Abstraction
Expose `dind` and `microvm` at the product level. Under `microvm`, pick an implementation per platform.
Recommended provider strategy:
- macOS with OrbStack installed: OrbStack isolated machine (v2.1.1+)
- macOS without OrbStack, or Linux with KVM: `libkrun` via the `smolvm` wrapper
- unsupported host: explicit fallback to `dind` or hard failure
Pros:
- matches the user-facing requirement directly
- keeps the UX stable while allowing different providers underneath
- avoids leaking a specific provider name into the top-level product contract
Cons:
- larger design surface
- requires backend-neutral lifecycle management
Option 3: Provider-Specific User Modes
Expose `dind`, `kata`, `apple`, and later other providers.
Pros:
- transparent about implementation
Cons:
- leaks infrastructure choices into the operator UX
- makes cross-platform defaults and docs more complex
- encourages provider-specific branching too early
Recommendation
Choose Option 2.
Make microvm the user-facing mode and treat the actual provider as an internal decision. This keeps the feature aligned with the isolation model the operator cares about while preserving room for different host-specific implementations.
Critical Compatibility Question: Will Existing Agent Dockerfiles Work?
Mostly yes for the agent images themselves.
The current role repo contract only requires that the final stage use the construct image. The existing agent Dockerfiles are standard OCI-style environment definitions and should remain valid inputs for both dind and microvm modes.
The real compatibility issue is not the Dockerfiles. It is the runtime contract around them:
- current agents expect Docker CLI tooling in the sandbox
- current launch flow injects `DOCKER_HOST`
- current runtime assumes direct bind mounts and `docker attach`
So the correct conclusion is:
- agent Dockerfiles are reusable
- the runtime backend is what must change
- the product should preserve a Docker-compatible inner engine where possible so current agents continue to function
Backend-Specific Design
Section titled “Backend-Specific Design”DinD Mode
`dind` mode should formalize and harden the current design.
Required improvements:
- move the current runtime into a named backend module
- stop using unauthenticated plain TCP DinD
- prefer TLS or a private socket transport
- add a backend-neutral instance registry instead of inferring all state from Docker names
- only persist last-used agent metadata on successful launch
Possible hardening layers to evaluate:
- `docker:dind-rootless`
- `sysbox-runc` on Linux hosts
sysbox is especially relevant as a Linux-only improvement path because it can support Docker-in-Docker without the usual privileged container model. It is not a microVM and should not be presented as one.
MicroVM Mode
`microvm` mode should provide a stronger isolation boundary while keeping the same high-level operator workflow.
Required properties:
- private engine inside the VM boundary
- reusable agent image or equivalent runnable artifact
- workspace access delivered into the VM
- persisted Claude/GitHub/plugin state mounted or synchronized into the VM
There are two realistic local providers: OrbStack isolated machines on macOS, and libkrun via `smolvm` cross-platform. Both are covered in detail in the April 2026 research sections below.
MicroVM Implementation Primer
The most important implementation question is not “can jackin use a VM?” It is:
- what should run inside the VM
- what should stay on the host
- what parts of the current Docker contract need to survive inside the sandbox
For this project, the most practical mental model is:
- keep using Dockerfiles and OCI images as the packaging format
- change the isolation boundary from host containers to a VM
- provide a private Docker-compatible engine inside that VM when the agent needs Docker workflows
Build / Run Matrix
The build path and the runtime path are separate decisions.
| Build path | Run path | Viable for jackin? | Notes |
|---|---|---|---|
| Host Docker | Host container | Yes | Current implementation |
| Host Docker | MicroVM | Yes | Best first prototype for microvm |
| VM-local Docker | MicroVM | Yes | Closer to Docker Sandboxes |
| Host Docker | Remote Linux microVM | Yes | Useful if local host cannot provide a microVM backend |
The strongest short-term recommendation is:
- keep host-side image builds in the first phase
- run the resulting agent image inside a microVM
- provide the private engine inside the VM boundary, not on the host
That gives a meaningful security improvement without redesigning the whole build pipeline on day one.
What Should Run Inside The VM
The cleanest microVM model for jackin is not “replace Docker with a VM”. It is:
- run the agent inside the VM
- run a private Docker-compatible engine inside the same VM
- expose the workspace into the VM
- mount or synchronize the persisted Claude/GitHub/plugin state into the VM
That means the VM should contain at least:
- the agent runtime environment
- Claude entrypoint support
- a Docker-compatible daemon such as `dockerd`, or possibly `containerd` plus compatibility tooling
- guest-local writable storage for engine state
This is the closest match to Docker Sandboxes’ architecture while still reusing jackin’s current agent image model.
Why Existing Agent Dockerfiles Still Matter
Agent Dockerfiles are still useful because they define the userland environment:
- language runtimes
- development tools
- shell environment
- plugins and conventions
So the likely design is not “replace Dockerfiles with VM images.” It is:
- Dockerfile builds agent filesystem/tooling layer
- microVM provider decides how to execute that layer safely
In other words, the Dockerfile remains the environment definition, while the microVM becomes the runtime boundary.
Providers Considered and Superseded
Earlier iterations of this research investigated Kata Containers as a Linux provider and Apple Containerization as a macOS provider. Both are still real options in the abstract, but the April 2026 updates below replace them with two concrete, actively-maintained paths:
- OrbStack with isolated machines for macOS — production-ready, shell-out to `orb`, scoped filesystem via v2.1.1+
- libkrun (via `smolvm` or direct embedding) for cross-platform — open source, Rust-native, same ecosystem as Podman/crun
The Kata and Apple Containerization notes are not repeated here because:
- Kata integrates naturally through containerd rather than Docker. It remains a credible Linux path if the libkrun direction does not work out, but requires jackin to adopt a containerd-oriented inner engine — a bigger architectural bet than the libkrun path, which keeps OCI images as the packaging format end to end. Kata’s Docker-in-guest storage caveat (`virtio-fs` not usable as the OverlayFS upper layer) is also a real constraint.
- Apple Containerization (the Apple `container` CLI / `containerization` framework) is a credible macOS path, but since OrbStack’s v2.1.x `--isolated` mode now offers the sandboxing semantics the initial research was worried about, adding Apple Containerization as a third macOS option does not pay for its maintenance cost.
If either provider turns out to be necessary later, the backend abstraction below is deliberately shaped so that adding one more provider under the microvm umbrella is a mechanical change, not a design change.
Direct Hypervisor Paths To Defer
It is possible to imagine jackin owning more of the VM runtime stack directly using technologies like raw KVM, Firecracker, Cloud Hypervisor, or Apple Virtualization.framework.
This is not the recommended first implementation path.
Why to defer it:
- it pushes jackin toward becoming a sandbox runtime product rather than an operator CLI
- it would require jackin to own more low-level VM orchestration concerns directly
- `libkrun` and OrbStack already solve a meaningful portion of that work in their respective environments; embedding `libkrun` directly (Option B in the April 2026 research below) is a smaller commitment than building from raw hypervisor APIs
This should remain future research unless both the OrbStack provider and the libkrun-wrapper provider prove too limiting.
Comparison To Docker Sandboxes
Docker Sandboxes combines three important ideas:
- microVM isolation boundary
- private Docker daemon inside the sandbox
- host-side orchestration and policy layer around the sandbox
The closest jackin implementation would therefore be:
- keep current agent image build flow at first
- run the agent inside a microVM
- provide the Docker-compatible engine inside that VM, not as a host-side sidecar
- track sandbox instances independently of host Docker container naming
That still differs from Docker Sandboxes in one important way during the first phase:
- initial jackin `microvm` mode will likely still build images on the host, while Docker Sandboxes keeps the main sandbox execution model fully inside the VM boundary
That is acceptable as an incremental architecture, but it should be described honestly.
Provider Implementation Matrix
The `microvm` user-facing mode routes to one of two providers depending on host. See the April 2026 research sections below for the full design of each.
| Topic | Linux / cross-platform microvm | macOS microvm |
|---|---|---|
| Recommended provider | libkrun via smolvm wrapper | OrbStack isolated machine (v2.1.1+) |
| Best first integration style | shell out to smolvm CLI | shell out to orb CLI |
| Longer-term integration style | embed libkrun directly via C ABI (or consume a future smolvm Rust crate) | continue shell-out; optional Swift helper only if OrbStack limits become constraints |
| Agent image source | host-built OCI image | host-built OCI image |
| Inner engine requirement | none (agent is the workload); add dockerd only if agent runs sibling containers | VM-local dockerd (optional, per agent needs) |
| Workspace model | explicit volume declared on VM launch | selective --share on orb create --isolated |
| Host filesystem default | invisible unless declared | invisible in isolated mode; full /Users in default mode (not used) |
| Main blocker | libkrunfw distribution; image→rootfs pipeline mechanics | no built-in host-side network proxy or credential injection |
| Best first milestone | experimental microvm via smolvm on Linux + macOS | experimental microvm via orb --isolated on macOS |
| License | Apache-2.0 (libkrun, smolvm) | proprietary (OrbStack) |
Key Architecture Changes Required
Section titled “Key Architecture Changes Required”1. Backend Abstraction
The current `load_agent` flow needs a backend seam.
Suggested responsibilities:
- repo resolution and validation
- image build
- persisted agent-state preparation
- backend launch/attach/list/eject
Suggested internal shape:
- `src/backend/mod.rs`
- `src/backend/dind.rs`
- `src/backend/microvm.rs`
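To make the seam concrete, here is a minimal sketch of what the trait could look like. All names (`SandboxBackend`, `LaunchSpec`, `InstanceHandle`) and fields are illustrative assumptions for the next design pass, not existing jackin types:

```rust
// Hypothetical sketch of the backend seam; names and fields are illustrative.
use std::path::PathBuf;

/// Everything a launch needs, resolved before any backend is involved.
pub struct LaunchSpec {
    pub image: String,        // host-built OCI image tag
    pub workspace: PathBuf,   // resolved workspace root
    pub state_dir: PathBuf,   // persisted Claude/GitHub/plugin state
    pub display_name: String,
}

/// Backend-neutral handle returned by launch and used by attach/eject.
pub struct InstanceHandle {
    pub id: String,               // stable instance ID, not a Docker container name
    pub backend: String,          // "dind" or "microvm"
    pub provider: Option<String>, // "orbstack" or "smolvm" when backend is "microvm"
}

/// The seam each backend module implements.
pub trait SandboxBackend {
    fn launch(&self, spec: &LaunchSpec) -> Result<InstanceHandle, String>;
    fn attach(&self, handle: &InstanceHandle) -> Result<(), String>;
    fn list(&self) -> Result<Vec<InstanceHandle>, String>;
    fn eject(&self, handle: &InstanceHandle, purge: bool) -> Result<(), String>;
}
```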
2. Instance Registry
The project should stop treating Docker names as the source of truth.
Persist per-instance metadata such as:
- stable instance ID
- agent selector
- backend kind
- provider kind
- workspace label
- display name
- backend-specific handle
This is required for mixed backends because VM instances will not naturally map to current Docker naming conventions.
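As a sketch of what one persisted record could hold (field names are assumptions; `serde` is assumed for serialization, with the on-disk format left open):

```rust
// Hypothetical registry entry persisted per instance; fields mirror the list above.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
pub struct InstanceRecord {
    pub id: String,                // stable instance ID (e.g. a UUID string)
    pub agent: String,             // agent selector the operator used
    pub backend: String,           // "dind" | "microvm"
    pub provider: Option<String>,  // "orbstack" | "smolvm" when backend is "microvm"
    pub workspace: String,         // workspace label
    pub display_name: String,
    pub handle: String,            // backend-specific handle: container name, VM name, ...
}
```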
3. Workspace Materialization Model
Today workspaces assume direct bind mounts. A VM backend may need a different transport.
The design should introduce a backend-neutral concept such as:
- direct bind mount
- shared filesystem passthrough
- synchronized directory
This does not require changing the user-facing workspace model immediately, but it does require changing the internal representation.
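One possible internal representation, sketched with illustrative variant names (nothing here exists in jackin today):

```rust
// Hypothetical backend-neutral workspace transport; one variant per delivery model.
use std::path::PathBuf;

pub enum WorkspaceTransport {
    /// Direct host bind mount (what dind does today).
    BindMount { host_path: PathBuf },
    /// Shared filesystem passthrough into a VM (e.g. a VirtioFS share).
    SharedFs { host_path: PathBuf, guest_path: PathBuf },
    /// Synchronized directory for hosts where passthrough is unavailable.
    Synced { host_path: PathBuf },
}
```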
4. Runtime Capability Contract
The product should define what an agent runtime is expected to provide.
At minimum:
- shell execution
- Claude entrypoint support
- Git identity configuration
- plugin bootstrap
- Docker-compatible engine access inside the sandbox, if the backend promises Docker workflows
This prevents future backends from being “supported” in name but missing core behavior.
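A sketch of how that contract could be checked at launch time; the `Capability` enum and `require` helper are assumptions, not existing code:

```rust
// Hypothetical capability contract so a backend can fail fast instead of
// being "supported" in name while missing core behavior.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Capability {
    Shell,            // shell execution
    ClaudeEntrypoint, // Claude entrypoint support
    GitIdentity,      // Git identity configuration
    PluginBootstrap,  // plugin bootstrap
    DockerEngine,     // Docker-compatible engine inside the sandbox
}

/// Fail fast if a backend cannot satisfy what the agent needs.
pub fn require(have: &[Capability], needed: &[Capability]) -> Result<(), String> {
    for cap in needed {
        if !have.contains(cap) {
            return Err(format!("backend is missing required capability: {cap:?}"));
        }
    }
    Ok(())
}
```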
Platform Support Matrix
DinD
- macOS: supported now through existing Docker environments
- Linux: supported now
- Windows/WSL2: possible but still secondary
MicroVM
- macOS (Apple Silicon, macOS 14+): primary target is OrbStack isolated machine; libkrun via `smolvm` is the open-source alternative
- macOS (Intel, macOS 11+): libkrun via `smolvm` only (OrbStack isolated-machine mode is not Intel-limited per se, but Apple Silicon is primary)
- Linux + KVM: libkrun via `smolvm`
- Linux without KVM: fallback to `dind` with a clear message
- Windows: fallback to `dind`; neither libkrun upstream nor OrbStack target Windows
Suggested Operator Scenarios
Section titled “Suggested Operator Scenarios”Scenario 1: Current Behavior, Explicitly Named
The operator uses:
```
jackin load agent-smith --sandbox-mode dind
```
Outcome:
- current behavior preserved
- runtime is clearly labeled as container-based
Scenario 2: Stronger Isolation on Mac
The operator uses:
```
jackin load the-architect --sandbox-mode microvm
```
On a Mac with OrbStack installed, jackin selects OrbStack and creates an `--isolated` machine with selective shares for the workspace and the jackin state directory.
Outcome:
- agent runs inside a VM with a scoped filesystem
- runtime output: `backend: microvm (orbstack)`
Scenario 3: Stronger Isolation on Linux or Mac without OrbStack
The operator uses:
```
jackin load the-architect --sandbox-mode microvm
```
On a Linux host with KVM, or on a Mac without OrbStack, jackin selects smolvm and boots the agent image directly under libkrun.
Outcome:
- agent runs inside a libkrun microVM
- runtime output: `backend: microvm (smolvm)`
Scenario 4: Auto Fallback
The operator uses:
```
jackin load agent-smith --sandbox-mode auto
```
Outcome:
- macOS with OrbStack: use `microvm (orbstack)`
- macOS without OrbStack, Linux with KVM: use `microvm (smolvm)`
- otherwise: fall back to `dind` with a clear message (a selection sketch follows below)
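A minimal sketch of that fallback rule as a pure function; the detection inputs (`has_orbstack`, `has_kvm`) are assumed to come from host probing elsewhere and everything here is illustrative:

```rust
// Hypothetical auto-selection mirroring the outcomes above.
pub enum Selection {
    MicrovmOrbstack,
    MicrovmSmolvm,
    /// Fallback carries the message shown to the operator.
    DindFallback(&'static str),
}

pub fn select_auto(is_macos: bool, has_orbstack: bool, has_kvm: bool) -> Selection {
    if is_macos && has_orbstack {
        Selection::MicrovmOrbstack
    } else if is_macos || has_kvm {
        // libkrun path: HVF on macOS, KVM on Linux
        Selection::MicrovmSmolvm
    } else {
        Selection::DindFallback("no microVM provider available; falling back to dind")
    }
}
```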
Implementation Phases
Section titled “Implementation Phases”Phase 1: Prepare the Codebase
- extract a backend interface from the current runtime flow
- add backend-neutral instance persistence
- fix current launch metadata persistence bugs
- harden DinD transport and cleanup behavior
Phase 2: Ship Explicit DinD Mode
Section titled “Phase 2: Ship Explicit DinD Mode”- introduce CLI/config support for
--sandbox-mode - map
dindto current behavior - keep
microvmhidden or experimental until at least one provider works end to end
Phase 3: First MicroVM Provider — smolvm via libkrun
- add `src/backend/microvm_smolvm.rs` that shells out to `smolvm`
- package agent OCI image for smolvm consumption (local registry or docker-archive)
- validate workspace semantics, attach/reconnect, and cold-start latency
- document host requirements (KVM on Linux, macOS 14+ for HVF)
Phase 4: Second MicroVM Provider — OrbStack Isolated Machine
- add `src/backend/microvm_orbstack.rs` that drives `orb create --isolated` with selective shares
- route macOS hosts with OrbStack to this provider by default under `auto`
- surface provider name in runtime output
Phase 5: Docs and Product Positioning
- update docs to describe `dind` vs `microvm (orbstack|smolvm)`
- keep the security model blunt and accurate: neither microVM provider today matches Docker Sandboxes’ network-proxy / credential-injection layers
- explain host support and fallback behavior clearly
Risks and Design Traps
- selecting OrbStack’s default machine mode by accident and silently losing the scoped-filesystem property — the provider must always use `--isolated`
- assuming `smolvm`’s shell-out surface is stable across its pre-1.0 releases without pinning
- supporting a backend that cannot actually satisfy Docker-based agent workflows (libkrun+smolvm alone does not run sibling Docker containers without extra work)
- claiming feature parity with Docker Sandboxes when the network-policy and credential-injection layers are not yet built
Alternatives Worth Mentioning in the Future Design
Section titled “Alternatives Worth Mentioning in the Future Design”Sysbox
Good Linux-only hardening path for `dind` mode.
- better than privileged DinD
- not a microVM
- useful if the product wants a stronger container boundary without VM orchestration
gVisor
Good defense-in-depth option, but not the best primary answer for this feature.
- stronger than plain containers
- weaker than microVMs
- not the clearest fit when nested Docker workflows are a core requirement
Other libkrun Wrappers
Section titled “Other libkrun Wrappers”crun --krun: OCI runtime substitute for Podman. Relevant if jackin ever adopts a containerd/Podman-oriented flow; not relevant today.krunvm:docker run-style CLI. Duplicates smolvm’s scope with less packaging polish.krunkit: GPU-forward libkrun VMs on macOS. Out of scope unless agents need GPU acceleration.muvm: Minimal launcher optimized for desktop apps in a VM. Scope mismatch for agent sandboxes.
Kata Containers
Retained as a Linux contingency if the libkrun path hits an unexpected ceiling. Not pursued in the April 2026 plan because containerd-oriented integration is a bigger jackin-side change than the libkrun route.
Apple Containerization
Retained as a macOS contingency if OrbStack becomes unsuitable. Not pursued in the April 2026 plan because OrbStack v2.1.x now covers the sandboxing case and adding a third macOS provider does not pay for its maintenance cost.
What the Next Agent Should Produce
The next design pass should turn this TODO into a full implementation design with:
- exact CLI and config schema changes
- exact backend trait/module structure
- exact instance registry format
- workspace transport model for VM providers
- persistence policy for user state vs engine state
- provider selection rules for Linux/macOS
- rollout plan for experimental vs stable support
OrbStack VM Backend Research (April 2026)
This section documents research into using OrbStack Linux VMs as a concrete, macOS-focused microvm backend. The section has been updated to reflect OrbStack 2.1.0 (April 2026), which shipped isolated machines without filesystem integration, and OrbStack 2.1.1, which added selective file sharing mounts in isolated machines. Those two features together close the most significant gap flagged in the initial write-up.
Why OrbStack
OrbStack collapses the macOS microVM story into a single, production-ready tool that already solves most of the hard problems: sub-2-second VM boot, scoped file sharing (as of v2.1.1), networking, and Docker-inside-VM.
The trade-off is that OrbStack is macOS-only, which means it cannot be the universal answer. But since the majority of jackin users are Mac developers, and the security comparison gap with Docker Sandboxes is most relevant on developer laptops, OrbStack is a pragmatic first target — and since v2.1.0 also a defensible one on security grounds.
OrbStack Capabilities Summary
OrbStack provides lightweight Linux VMs on macOS via the `orb` CLI.
Key properties relevant to jackin:
- VM lifecycle: `orb create`, `orb start`, `orb stop`, `orb delete` — fully scriptable, no GUI needed
- Supported distros: 15 distros including Debian (trixie confirmed working), Ubuntu, Alpine, Fedora, Arch, etc.
- Boot time: ~2 seconds — comparable to container startup
- Isolated machines (v2.1.0+): opt-in mode where a machine has no automatic filesystem integration with macOS; `/Users` is not auto-mounted and the host filesystem is invisible by default
- Selective file sharing (v2.1.1+): inside an isolated machine, the operator can declare specific host paths to expose — the Docker Sandboxes scoped-mount model, but per-machine
- Non-isolated (default) file sharing: host `/Users/...` auto-mounted at the same path inside the VM via VirtioFS
- Networking: VM services auto-accessible at `localhost`; DNS at `{name}.orb.local`; full internet access (no built-in host-side proxy)
- Docker inside VM: standard `apt install docker.io` gives a full Docker daemon — no sidecar needed
- Provisioning: cloud-init support (`orb create debian:trixie my-vm -c cloud-init.yml`)
- Resource overhead: dynamic memory allocation — idle VMs use near-zero resources (v1.7.0+)
- Command execution: `orb -m {name} <command>` runs commands inside the VM; SSH also available
- Architecture: Apple Silicon primary; Intel Mac supported; Rosetta for x86_64 emulation
- Additional security features: device-mapper encryption (LUKS, dm-verity, dm-integrity) support for guest disks (v2.0.2+), container/machine data sandboxing by macOS (v1.9.1+), trusted `.orb.local` certificates for in-container HTTPS (v1.9.0+)
Proposed OrbStack VM Flow
The current DinD flow:
```
Host Docker → build image → create network → start docker:dind sidecar
→ start agent container → agent talks to DinD via DOCKER_HOST=tcp://dind:2375
→ workspace via bind mounts
```
The proposed OrbStack VM flow:
```
Host Docker → build image → export image as tar
→ orb create debian:trixie jackin-{name} -c cloud-init.yml
→ inside VM: install Docker, load image, start agent container
→ agent talks to Docker via unix:///var/run/docker.sock (real daemon, not sidecar)
→ workspace via OrbStack's automatic VirtioFS mount
```
Key simplification: inside the VM, there is no need for a DinD sidecar at all. The VM’s own Docker daemon is the isolated engine. The agent can use `DOCKER_HOST=unix:///var/run/docker.sock` instead of a TCP connection to a sidecar.
Implementation Shape
The concrete shape below assumes isolated machines with selective shares (v2.1.1+). Any jackin flow that silently creates a default (non-isolated) machine has effectively downgraded the security story and should be treated as a bug.
1. VM Creation and Provisioning
```
orb create debian:trixie jackin-{name} \
  --isolated \
  --share "${WORKSPACE}" \
  --share "${HOME}/.jackin/data/jackin-{name}" \
  -c /tmp/jackin-cloud-init.yml
```
The exact flag spelling for selective shares depends on what OrbStack 2.1.1 settles on; `--share` is used here as a placeholder for the v2.1.1+ mechanism. The important invariant is: only the workspace and the jackin data directory for this instance are exposed, nothing else under `/Users`.
Cloud-init template:
```
#cloud-config
packages:
  - docker.io
runcmd:
  - systemctl enable docker
  - systemctl start docker
  - usermod -aG docker debian
```
2. Image Transfer
Without `/Users` auto-mount, the image tar must travel through an explicit share or through `orb push`:
```
# Build on host (reuse current flow)
docker build -t jackin-{slug} ...

# Export to the explicitly-shared jackin data dir
docker save jackin-{slug} -o ~/.jackin/data/jackin-{name}/image.tar

# Inside VM, load from the shared path
orb -m jackin-{name} docker load -i /Users/{user}/.jackin/data/jackin-{name}/image.tar
```
Alternative, without shares: `orb push jackin-{name} ~/.jackin/cache/jackin-{slug}.tar /tmp/image.tar` — explicit transfer, slower but makes the image-flow surface auditable.
3. Agent Launch Inside VM
```
orb -m jackin-{name} docker run -it --name agent \
  -e DOCKER_HOST=unix:///var/run/docker.sock \
  -e GIT_AUTHOR_NAME="{git_user_name}" \
  -e GIT_AUTHOR_EMAIL="{git_user_email}" \
  -e JACKIN=1 \
  -v /Users/{user}/Projects/myapp:/Users/{user}/Projects/myapp \
  -v /Users/{user}/.jackin/data/jackin-{name}/.claude:/home/agent/.claude \
  -v /Users/{user}/.jackin/data/jackin-{name}/.claude.json:/home/agent/.claude.json \
  -v /Users/{user}/.jackin/data/jackin-{name}/.config/gh:/home/agent/.config/gh \
  jackin-{slug}
```
These `-v` paths must match the shares declared at `orb create --isolated` time; anything else will simply not exist inside the VM.
4. Attach / Detach
```
# Reattach to running agent
orb -m jackin-{name} docker attach agent
```
5. Cleanup
Section titled “5. Cleanup”# Stop VM (preserves state for fast restart)orb stop jackin-{name}
# Or delete entirelyorb delete jackin-{name}Suggested User-Facing Name and Provider Selection
An earlier iteration of this research proposed exposing `orbstack-vm` as a top-level backend name. The updated recommendation — aligned with the libkrun+smolvm research in the next section — is to keep `microvm` as the user-facing mode and treat the specific provider (OrbStack, smolvm) as an internal selection:
- `dind` — current behavior, Docker-in-Docker sidecar (cross-platform)
- `microvm` — provider chosen by jackin based on host: OrbStack isolated machine on Mac with OrbStack installed; libkrun via `smolvm` otherwise
Config shape:
```
[runtime]
default_sandbox_mode = "dind"  # or "microvm" or "auto"

# optional provider override
microvm_provider = "auto"  # or "orbstack" or "smolvm"
```
CLI shape:
```
jackin load agent-smith --sandbox-mode dind
jackin load agent-smith --sandbox-mode microvm
jackin load agent-smith --sandbox-mode microvm --microvm-provider orbstack
jackin load agent-smith --sandbox-mode auto
```
Runtime output:
```
backend: dind
backend: microvm (orbstack, debian trixie)
backend: microvm (smolvm)
```
This keeps the operator UX stable while letting jackin pick the right provider for the host. The provider name is visible in the runtime output so operators can confirm what they got.
Architecture Comparison: DinD vs OrbStack Isolated Machine
| Aspect | DinD (current) | OrbStack isolated machine (proposed) |
|---|---|---|
| Isolation boundary | Container (shared host kernel, --privileged DinD) | Full VM (separate kernel, separate userland) |
| Docker access | TCP sidecar (tcp://dind:2375), no TLS | Native daemon (unix:///var/run/docker.sock) |
| Workspace delivery | Direct host bind mounts | Selective --share into isolated machine (v2.1.1+) |
| State persistence | Host dirs mounted into container | Explicitly-shared host dirs mounted into container inside VM |
| Container escape risk | Privileged sidecar = root-equivalent host access | Escape stays inside VM boundary |
| Startup time | Seconds (container pull + DinD readiness) | ~2s VM boot + ~30-60s first-time provisioning |
| Platform | macOS, Linux, WSL2 | macOS only |
| External dependency | Docker (already required) | Docker + OrbStack |
Docker Sandboxes Deep Comparison
Docker Sandboxes provides the most complete agent sandbox model available today. sbx itself does not require Docker Desktop, although custom template builds do. Understanding that split is essential for honest positioning of any jackin backend: the useful comparison is the sandbox architecture and operator surface, not an assumption that every Docker-owned capability maps to jackin.
Docker Sandboxes Architecture
Docker Sandboxes implements four distinct isolation layers:
1. Hypervisor isolation: Each sandbox runs in a lightweight microVM with its own Linux kernel. Uses Apple Virtualization.framework on macOS, Hyper-V on Windows. Processes inside the VM are invisible to the host and other sandboxes.
2. Filesystem isolation: Only the declared workspace directory is shared via filesystem passthrough. The workspace is mounted at the same absolute path inside the sandbox. Symlinks pointing outside the workspace scope are not followed. The rest of the host filesystem is completely invisible.
3. Network isolation: All HTTP/HTTPS traffic routes through a host-side proxy. Raw TCP, UDP, and ICMP are blocked at the network layer. Traffic to private IPs, loopback, and link-local addresses is prohibited. Sandboxes cannot reach each other or the host’s localhost. Only domains explicitly listed in network policies are reachable.
4. Credential isolation: The host-side proxy intercepts outbound API requests and injects authentication headers (API keys, tokens). Credential values never enter the VM. The proxy acts as a MITM for HTTPS, terminating TLS and re-encrypting with its own CA, allowing policy enforcement and credential injection.
Each sandbox also gets its own dedicated Docker Engine, completely isolated from the host Docker daemon. The agent cannot mount the host Docker socket.
Docker Sandboxes CLI
```
# Basic usage
docker sandbox run claude .

# With extra read-only workspaces
docker sandbox run claude ~/project-a ~/shared-libs:ro ~/docs:ro

# Named sandbox
docker sandbox run --name my-project claude .

# Custom base image
docker sandbox run --template python:3-alpine claude .

# With agent arguments
docker sandbox run claude . -- -p "What version are you running?"

# Lifecycle
docker sandbox ls
docker sandbox rm my-project
```
Workspaces are mounted at the same absolute path as on the host. Additional workspaces can be appended as arguments with optional `:ro` suffix.
Side-by-Side: Docker Sandboxes vs OrbStack VM (Isolated Machine Mode)
| Isolation Layer | Docker Sandboxes | OrbStack isolated machine (v2.1.1+) | Gap |
|---|---|---|---|
| VM boundary | microVM per sandbox, own kernel | Full Linux VM per agent, own kernel | Equivalent |
| Filesystem scope | Only declared workspace shared; symlinks outside scope blocked | No auto-mount; selective per-machine shared folders (v2.1.1+) | Essentially equivalent |
| Network | HTTP/HTTPS only via host proxy; raw TCP/UDP/ICMP blocked; domain allowlist | Full unrestricted network access; iptables in-guest is operator-configurable | Significant gap |
| Credentials | Host proxy injects auth headers; keys never enter VM | Keys passed as env vars or mounted files; they enter the VM | Significant gap |
| Inner Docker | Separate dockerd per sandbox; no host socket access | Separate dockerd in VM; no host socket access | Equivalent |
| Platform | macOS + Windows | macOS only | Minor gap |
Honest Security Positioning
| Backend | Security model |
|---|---|
| `dind` (current) | Container isolation with privileged sidecar — weakest boundary |
| OrbStack default machine | VM kernel boundary, but full /Users auto-mount — strong kernel isolation, weak filesystem isolation |
| OrbStack isolated machine (v2.1.0+) with selective shares (v2.1.1+) | VM kernel boundary + scoped filesystem — comparable to Docker Sandboxes on boundary and filesystem, still missing network proxy and credential injection |
| Docker Sandboxes | VM + scoped filesystem + network proxy + credential isolation — strongest |
The OrbStack VM backend provides a real and meaningful security improvement over DinD. The --privileged flag on the current DinD sidecar effectively gives the agent root-equivalent access to the host kernel. With OrbStack VMs, even a container escape stays inside the VM boundary.
As of v2.1.1 the filesystem-scope gap that previously distinguished OrbStack from Docker Sandboxes is substantially closed for any jackin flow that opts into --isolated and declares shares explicitly. The remaining gaps are network policy and credential injection, not VM isolation or filesystem scope.
Known Limitations and Gaps
Filesystem: Default Machines Still Auto-Mount /Users
The default (non-isolated) OrbStack machine mode still auto-mounts the macOS `/Users` directory via VirtioFS. For jackin’s use case this is not the right default: agents should not see every project on the host just because they happen to boot in a VM.
The practical resolution, as of April 2026, is to always create jackin’s machines with --isolated and then selectively share only the active workspace:
- v2.1.0 (April 19, 2026): “Isolated machines without filesystem integration” — an isolated machine has no automatic `/Users` mount and the host filesystem is not visible
- v2.1.1 (April 20, 2026): “Selective file sharing mounts in isolated machines” — inside an isolated machine, specific host paths can be declared as shares
Historical context (kept for audit trail — these are now resolved upstream by OrbStack v2.1.x, so the roadmap no longer links to the old tracking issues):
- orbstack/orbstack#169 (2023) — the long-standing request that tracked this. The feature shipped as “isolated machines” + “selective shares” in v2.1.x.
- orbstack/orbstack#1243 (2024) — closed as duplicate.
- orbstack/orbstack#2308 (January 2026) — superseded by v2.1.x.
Implication for jackin: the orbstack provider should default to `--isolated` and reject any flow that would fall back to the default machine mode, unless the user opts out explicitly. The `umount /Users` workaround is no longer needed.
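A sketch of that guard; the spec type and opt-out flag are hypothetical and only exist to make the invariant checkable:

```rust
// Hypothetical pre-flight check: refuse to create a non-isolated machine
// unless the operator explicitly opted out of isolation.
pub struct MachineSpec {
    pub isolated: bool,
    pub shares: Vec<std::path::PathBuf>, // only the workspace + instance state dir
}

pub fn validate(spec: &MachineSpec, user_opted_out: bool) -> Result<(), String> {
    if !spec.isolated && !user_opted_out {
        return Err(
            "refusing to create a non-isolated OrbStack machine; \
             pass an explicit opt-out to downgrade isolation"
                .to_string(),
        );
    }
    Ok(())
}
```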
Network: No Built-in Host-Side Proxy
OrbStack VMs have full unrestricted network access by default, in both regular and isolated modes. A compromised agent could exfiltrate data to any endpoint. Partial mitigation is possible via iptables rules configured inside the VM through cloud-init, but this is not the same defense-in-depth as Docker Sandboxes’ host-side HTTPS proxy with domain allowlist.
This gap is unchanged by v2.1.x and is one of the two remaining structural differences with Docker Sandboxes.
Credentials: Must Enter the VM
Without a host-side proxy to inject credentials, API keys and tokens must be passed as environment variables or mounted files. A compromised agent has direct access to them.
Building a proxy similar to Docker Sandboxes’ approach would be a significant project. Short-term, credentials in the VM is the pragmatic path. This gap is also unchanged by v2.1.x.
Resource Limits Are Global
OrbStack’s CPU and memory limits are global across all VMs, not per-machine. A runaway agent in one VM could affect others. This is acceptable for developer workstations but worth documenting.
macOS Only
OrbStack does not run on Linux. Linux users need a different microvm provider — the libkrun + smolvm track documented in the next section.
Closing the Gaps Incrementally
After v2.1.x, the residual gaps between the OrbStack isolated-machine backend and Docker Sandboxes are fewer but still real:
1. Filesystem scope (was critical, now substantially closed): Addressed by `--isolated` + selective shares. Any open issue here is ergonomic (how jackin declares shares at machine-create time) rather than structural.
2. Network policy (significant): jackin could configure iptables inside the VM via cloud-init to restrict outbound traffic. A basic allowlist of required domains (GitHub, Claude API, npm/PyPI registries) would cover the most important cases. Still weaker than a host-side HTTPS proxy because enforcement lives inside the guest.
3. Credential injection (significant): A lightweight host-side proxy that intercepts outbound HTTPS from the VM and injects auth headers is a longer-term project. It would require intercepting traffic at the OrbStack network layer or configuring the VM to route through a local proxy.
4. Per-VM resource limits: Not solvable at the jackin level while OrbStack only supports global limits.
Section titled “Open Design Questions”-
VM lifecycle strategy: Create/delete per session, or keep VMs alive and stop/start? Keeping alive is faster but consumes disk. Recommendation: stop/start by default, delete on explicit
jackin eject --purge. -
Image transfer optimization:
docker save+ load through shared filesystem is simple but slow for large images. Could explore running a local registry that the VM pulls from, or using OrbStack’s Docker integration for image sharing. -
Attach UX: Currently
docker attachgives the terminal directly. With OrbStack VM, it becomesorb -m jackin-{name} docker attach agent— an extra layer. Need to verify that Ctrl+P,Q detach works through theorbwrapper. -
State persistence: Should
~/.jackin/data/{name}be mounted into the VM’s agent container via VirtioFS (same path), or should state live inside the VM? VirtioFS mount is simpler and consistent with current behavior. -
Multiple agents: One VM per agent (cleaner isolation, higher resource use) or one shared VM (more efficient, weaker isolation between agents)?
-
First-launch latency: First VM creation downloads a distro image (~1 min) and cloud-init installs Docker (~30-60s). Mitigations: pre-warm a jackin VM template, or keep VMs alive across sessions.
OrbStack Provider Matrix
| Topic | dind | microvm via OrbStack (isolated, v2.1.1+) |
|---|---|---|
| Platform | macOS, Linux, WSL2 | macOS only |
| External dependency | Docker | Docker + OrbStack 2.1.1+ |
| Isolation boundary | Container (privileged) | VM (Apple VZ), own kernel |
| Inner Docker | TCP sidecar | Native daemon in VM |
| Workspace model | Host bind mount | Selective shares into isolated machine |
| Host filesystem exposure | Full access | Only explicitly declared shares |
| Network policy | None | In-guest iptables (operator-configurable) |
| Provisioning | None needed | Cloud-init |
| VM distro | N/A | Debian trixie (recommended) |
| First integration style | Current code, refactored | Shell out to orb CLI |
| Main residual blocker | None (already working) | No host-side network proxy or credential injection |
| Best first milestone | Refactored into backend trait | Experimental OrbStack provider under microvm mode |
libkrun Foundation and smolvm Wrapper (April 2026)
This section supersedes earlier research into Kata Containers, Apple Containerization, and auser/mvm. It narrows the microvm backend story to two concrete, currently-maintained options: OrbStack with isolated machines (covered in the section above) and a libkrun-based wrapper (covered here). Both target the same product outcome; they differ on platform reach, openness, and engineering commitment.
Why libkrun Is the Right Foundation Layer
libkrun is a dynamic library that adds a minimal Virtual Machine Monitor to a host process and runs workloads inside it. It is not itself a CLI or a product — it is a library (`libkrun.so` / `libkrun.dylib`) that other tools embed to get microVM capabilities without reimplementing the hypervisor integration.
For jackin this is the right level of abstraction to care about because:
- It is the upstream that a growing cluster of tools converge on. `crun --krun`, `krunvm`, `krunkit`, `muvm`, and `smolvm` all wrap libkrun. Choosing a libkrun-based wrapper today does not lock jackin into one small project; it plugs into a whole ecosystem that has already done the hypervisor portability work.
- It already covers the platforms jackin cares about. KVM on Linux and HVF on macOS (ARM64, macOS 14+) are in upstream. Windows is not — libkrun-based wrappers inherit that.
- It is open source (Apache-2.0). Maintained under the `containers` GitHub organization (Podman / crun / Buildah ecosystem), with active releases — `libkrun-1.17.4` shipped February 18, 2026.
- It is Rust. 91% Rust, with a stable C ABI for language interop. Embedding it from jackin would be a supported path, not a workaround.
What libkrun Actually Provides
libkrun integrates code from Firecracker, rust-vmm, and Cloud Hypervisor into a single linkable library that exposes a small C API and several virtio devices:
- Guest init: boots a custom guest via a statically-linked C init binary (not a full systemd distro)
- Storage: `virtio-fs` for host directory passthrough; `virtio-block` for block-device rootfs
- Networking: two mutually exclusive modes —
  - virtio-vsock + TSI (Transparent Socket Impersonation): guest TCP/UDP syscalls are transparently redirected through vsock to a host-side proxy — no guest network interface, no NAT, no bridges; requires libkrun’s kernel (`libkrunfw`)
  - virtio-net + passt / gvproxy: conventional virtual NIC with a userspace network proxy on the host
- Guest communication: `virtio-vsock` end-to-end
- Security variants: generic, AMD SEV/SEV-ES/SEV-SNP (`libkrun-sev`), Intel TDX (`libkrun-tdx`), macOS EFI (`libkrun-efi`)
- Optional devices: virtio-gpu with Venus/native-context acceleration, virtio-balloon with free-page reporting, virtio-rng, virtio-snd, virtio-console
The novel piece for sandbox use cases is TSI. It reframes guest networking as host-mediated syscall forwarding, which means the host can apply per-guest network policy (allowlists, port whitelists, TLS termination) without guest cooperation and without exposing a routable network to the guest at all. For agent sandboxes this is the closest open-source analog to Docker Sandboxes’ host-side HTTPS proxy model.
Why Wrappers Are Required
libkrun does not by itself know how to:
- turn an OCI image into a bootable guest filesystem
- manage lifecycle (start/stop/restart/list)
- deliver workspaces, secrets, or state into the guest
- provide an interactive PTY from the host side
Every real user consumes libkrun through a wrapper that fills those gaps. The wrappers divide roughly by purpose:
| Wrapper | Purpose | Scope |
|---|---|---|
| `crun --krun` | OCI runtime substitute for Podman/Buildah | Drop-in container-engine backend |
| `krunvm` | `docker run`-style CLI for ad-hoc VMs | Developer-facing ad-hoc VMs |
| `krunkit` | GPU-forward libkrun VMs on macOS | Graphics/AI workloads |
| `muvm` | Minimal userspace microVM launcher | Desktop Linux apps inside a VM |
| `smolvm` | Portable, OCI-consuming VM runtime with packaged `.smolmachine` files | General-purpose portable VMs |
For jackin the candidate wrappers reduce to smolvm or a jackin-specific wrapper built directly on the libkrun C API. crun --krun is tied to Podman’s container workflow, which jackin does not use. krunvm and muvm are closer to developer conveniences than runtimes jackin would embed. krunkit solves a different problem.
smolvm as the Example libkrun Wrapper
smolvm from the smol-machines organization is the closest off-the-shelf analog to what jackin’s microvm backend would need to do. It sits on top of libkrun (plus libkrun’s custom kernel `libkrunfw`), adds an OCI image pipeline, and wraps the whole thing in a CLI.
Relevant properties:
- Host platforms: macOS 11+ on Apple Silicon and Intel; Linux x86_64 and aarch64 with KVM
- Guest: Linux matching the host arch, booted from OCI images pulled from Docker Hub / GHCR
- Boot time: under 200 ms cold start
- Packaging: `.smolmachine` file format — a single-file, stateful VM artifact that can be moved between hosts
- Volumes: directory mounts only (no single-file mounts); host paths declared per-invocation
- Networking: opt-in via `--net`, TCP/UDP only (no ICMP), host egress allowlist supported, SSH agent forwarding from host
- SSH keys: private keys stay on the host; smolvm forwards the agent socket rather than copying keys in
- License: Apache 2.0
- Language: Rust (83%), shell (15%), TypeScript (2%)
- Maintenance: latest release `v0.5.19` on April 18, 2026; 41 releases; 515 commits on main; 2.3k stars
smolvm is explicitly positioned as a runtime, not a platform — “one CLI, OCI in, VM out”. That scope alignment matters: jackin can shell out to smolvm for the microVM plumbing and keep all the agent/workspace/plugin concepts in jackin’s own layer.
Three Integration Options for jackin
Section titled “Three Integration Options for jackin”Option A: Shell out to smolvm
jackin treats smolvm as an external CLI, similar to how the current implementation shells out to `docker`.
```rust
// pseudocode
Command::new("smolvm")
    .arg("run")
    .args(["--image", &agent_image])
    .args(["--volume", &format!("{ws}:{ws}")])
    .args(["--net", "allowlist"])
    .args(["--allow", "api.anthropic.com"])
    .args(["--allow", "github.com"])
    .arg(&instance_name)
    .spawn()?;
```
Pros:
- lowest implementation cost — reuses smolvm’s lifecycle, image handling, and OCI fetching
- no FFI, no C-API bindings, no libkrun kernel management
- tracks smolvm’s upstream improvements for free
Cons:
- inherits smolvm’s CLI and image model even when jackin would prefer a different shape
- lifecycle/attach semantics are constrained by what the CLI exposes
- adds a non-Rust dependency surface (binary install) on the user
This is the right shape for the first prototype. It validates the libkrun approach end-to-end without committing jackin to hypervisor-adjacent code.
Option B: Embed libkrun directly via its C API
jackin links against libkrun and libkrunfw and drives the VM lifecycle itself.
Pros:
- full control over guest boot, device configuration, TSI policy, and PTY wiring
- can align the sandbox model exactly with jackin’s agent/workspace contract
- removes the smolvm CLI dependency and ships as a single Rust binary
Cons:
- substantial engineering: C-ABI bindings, kernel distribution (`libkrunfw` ships as a shared object too), root-filesystem assembly, guest init contract
- jackin effectively becomes a libkrun wrapper, which broadens the maintenance footprint
- no existing OCI-to-libkrun-rootfs pipeline in Rust — would need to build one or borrow from smolvm
This is the right shape for a long-term primary backend, but only if the first prototype proves the approach is right and the shell-out boundary becomes the bottleneck.
Option C: Track smolvm for an embeddable Rust SDK
smolvm is 83% Rust and organized as a Cargo workspace. It currently ships a CLI, not a stable library crate. If the smol-machines organization publishes an embeddable `smolvm-core` (similar to what auser/mvm attempted), jackin could depend on it directly — getting the convenience of a prebuilt OCI-to-microVM pipeline with the control of in-process Rust APIs.
This is not available today. It is worth monitoring as a middle path between Option A and Option B.
Wrapping It Up: Rootfs and Agent Image Flow
The current jackin agent image is an OCI image built with host Docker. smolvm’s OCI-consuming flow preserves that contract directly:
```
[Host] docker build → jackin-{slug} image (OCI)
[Host] docker push → local registry (or use docker-archive)
[Host] smolvm run jackin-{slug} --volume {ws}:{ws} --net allowlist → VM
[VM]   guest init → jackin entrypoint → Claude
```
This removes the entire “inner Docker daemon” question that previous research variants (OrbStack VM, mvm) had to solve. If the agent itself does not need to run sibling Docker containers, libkrun-based wrappers do not need an inner dockerd at all — the agent is the single workload in its own VM. That is a meaningful simplification over the DinD and OrbStack-VM flows.
If an agent does need sibling Docker workflows (uncommon but real), the options are:
1. run a nested `dockerd` inside the guest (the same trade-off DinD already has)
2. expose a jackin-managed sidecar VM for the sibling workload
3. decline Docker-in-microVM for those agents and recommend they run under the `dind` backend
For the first phase, option 3 is acceptable.
Side-by-Side: libkrun/smolvm vs OrbStack (with Isolated Machines) vs Docker Sandboxes
| Aspect | libkrun + smolvm | OrbStack (isolated machine) | Docker Sandboxes |
|---|---|---|---|
| Platform | macOS 11+, Linux (KVM), aarch64/x86_64 | macOS only | macOS + Windows |
| Hypervisor | libkrun over KVM / HVF | Apple Virtualization.framework | Apple VZ / Hyper-V |
| Filesystem model | Explicit directory volumes only | Explicit shared folders (v2.1.1+) | Declared workspace(s) only |
| Network model | Opt-in; TSI + host allowlist | Standard VM networking | Host proxy with domain allowlist |
| Credential model | Env/volumes; SSH agent forwarded, keys stay on host | Env/volumes; keys enter VM | Proxy-injected headers; keys never enter VM |
| Inner Docker required? | No (agent is the workload) | No (per-machine dockerd if needed) | No (separate engine per sandbox) |
| Guest communication | vsock (via smolvm / libkrun) | orb CLI / SSH | Docker SDK / proxy |
| Image format | OCI directly | Standard distro + cloud-init | OCI directly |
| Dependency | smolvm CLI, libkrun, libkrunfw | OrbStack app (commercial) | Docker Desktop (commercial) |
| License | Apache 2.0 (all) | Proprietary | Proprietary |
| Maturity for jackin | Newer; requires integration work | Production-ready; shell-out is trivial | Mature; tightest model but not embeddable |
Why the Landscape Actually Shows Two Viable Paths
The honest reading of the April 2026 landscape for jackin is that OrbStack and libkrun-based wrappers are not competing for the same slot — they are two different answers to two different questions:
- If the goal is “give Mac users a stronger local sandbox tomorrow, with minimal jackin engineering”: OrbStack isolated machines. Production-ready, the security gap that previously ruled it out is closed (v2.1.0 / v2.1.1), integration is shell-out to `orb`, and users who already have OrbStack installed get it for free.
- If the goal is “give every user on every platform a stronger sandbox that jackin owns end-to-end”: libkrun via smolvm, then eventually directly. Open source, Apache 2.0, cross-platform, same language as jackin, and aligned with the broader container/VM ecosystem (crun, Podman, Buildah) rather than a single commercial product.
Those two paths can coexist in the backend abstraction. The microvm user-facing mode can route to orbstack on macOS hosts that have it, and to smolvm on Linux and on macOS hosts that don’t. Users pick the mode; jackin picks the provider.
Security Positioning (Revised)
| Backend | Isolation | Filesystem | Network | Credentials | Rating |
|---|---|---|---|---|---|
| `dind` | Container (privileged sidecar) | Full host via bind mounts | Unrestricted | In container | Basic |
| `microvm` via OrbStack isolated machine | VM (Apple VZ) | Only declared shared folders | Unrestricted by default | Env/volumes enter VM | Strong (macOS) |
| `microvm` via libkrun/smolvm | VM (KVM/HVF) | Only declared volumes | TSI + host allowlist | Env/volumes enter VM; SSH keys stay on host | Strong (cross-platform) |
| Docker Sandboxes | VM (Apple VZ / Hyper-V) | Only declared workspace | Host HTTPS proxy, domain allowlist | Host proxy injects, never in VM | Strongest |
Both microvm provider paths close the two biggest DinD gaps (shared kernel, privileged sidecar). Neither matches Docker Sandboxes’ credential injection today. The libkrun path has a more plausible route to closing the network-policy gap because TSI is a host-side syscall mediator by design — the host could implement the proxy behavior itself, independent of guest cooperation. That is a future project, not a first-phase commitment.
Revised Implementation Phases (libkrun + smolvm track)
Phase 1: Backend Abstraction
- Extract a backend trait from the current runtime flow
- Move current code into `src/backend/dind.rs`
- Introduce a `--backend` CLI flag: `dind`, `microvm`, `auto`
- Define a backend-neutral instance registry (not keyed on Docker container names)
Phase 2: First microvm Provider — smolvm Shell-Out
- Detect `smolvm` availability (`smolvm --version`) and libkrun/libkrunfw host requirements; a detection sketch follows this list
- Implement a `src/backend/microvm_smolvm.rs` that shells out to `smolvm run/stop/attach`
- Image flow: reuse current Docker build, either push to a local registry smolvm can pull from or export via docker-archive and feed to smolvm
- Workspace: declared as an explicit volume at the same absolute path
- Attach: validate smolvm’s attach semantics for an interactive Claude session
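A minimal detection sketch for the first bullet; only `smolvm --version` is taken from this plan, the rest is illustrative:

```rust
// Probe for the smolvm CLI by running `smolvm --version`.
use std::process::Command;

pub fn smolvm_available() -> bool {
    Command::new("smolvm")
        .arg("--version")
        .output()
        .map(|out| out.status.success())
        .unwrap_or(false)
}
```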
Phase 3: Second microvm Provider — OrbStack Isolated Machine
- Implement `src/backend/microvm_orbstack.rs` that drives `orb create --isolated` and sets up selective shared folders
- Reuse the same backend-neutral instance registry
- Surface the backend choice in runtime output: `backend: microvm (orbstack)` / `backend: microvm (smolvm)`
Phase 4: Auto-Selection and Fallback
- `auto` picks OrbStack when the host has it and the user is on macOS, otherwise smolvm, otherwise falls back to `dind` with an explicit message
- Allow explicit provider override for users who want to force one or the other
Phase 5: Security Hardening
- smolvm provider: enable host egress allowlist by default with a sensible baseline (Anthropic API, GitHub, language package registries)
- OrbStack provider: default to `--isolated` with explicit `--share` entries; never auto-mount `/Users`
- Document that neither backend currently matches Docker Sandboxes’ credential proxy
Phase 6: Longer-Horizon Options
- Evaluate promoting the smolvm provider from shell-out to direct libkrun integration (Option B) if the CLI boundary becomes a product constraint
- Track smol-machines for an embeddable Rust crate (Option C)
- Investigate a host-side HTTPS proxy built on top of TSI to close the credential-injection gap
Open Questions
- Image transfer mechanics for smolvm. Is the cleanest path a local OCI registry (jackin runs one on the host for VMs to pull from) or docker-archive + side-channel import? A local registry is simpler for users but adds a background service.
- Interactive attach semantics. smolvm’s attach/exec behavior needs to be validated for Claude’s interactive session — specifically Ctrl+P/Ctrl+Q detach, window resize forwarding, and reconnect.
- macOS host matrix. smolvm requires macOS 11+; libkrun’s HVF backend requires macOS 14+. The effective minimum for jackin’s libkrun track is macOS 14+.
- `libkrunfw` distribution. The custom guest kernel ships as a shared library. Packaging this for end users (Homebrew tap? bundled with jackin?) is an open question for direct-integration paths.
- Outer-DinD agents. Some current agent images rely on nested Docker. Is the answer to route those to the `dind` backend only, or to invest in nested dockerd inside the microVM? First-phase recommendation: route them to `dind`.
Related Files
- `src/runtime/launch.rs` - current launch orchestration
- `src/runtime/attach.rs` - attach and hardline behavior
- `src/runtime/cleanup.rs` - eject, purge, orphan GC
- `src/docker.rs` - Docker CLI execution model
- `src/instance/naming.rs` - persisted state and Docker-style naming assumptions
- `src/instance/auth.rs` - state preparation that assumes container semantics
- `src/workspace/resolve.rs`, `src/workspace/mounts.rs` - workspace mount resolution
- `src/config/mod.rs` - future config surface for sandbox mode
- `src/derived_image.rs` - derived runtime layer generation
- `docker/construct/Dockerfile` - current Docker-oriented construct contract
- `docker/runtime/entrypoint.sh` - current runtime entrypoint behavior
- Architecture - current architecture story
- Security model - current security boundary statement
- Comparison - comparison with Docker Sandboxes