
Selectable Sandbox Backends: DinD and MicroVM

Status: Deferred - requires a dedicated design pass before implementation

jackin currently has exactly one runtime model:

  • build the agent image on the host Docker engine
  • create a per-agent Docker network
  • start a privileged docker:dind sidecar
  • start the agent container on that network
  • point the agent at the sidecar with DOCKER_HOST=tcp://...:2375

That model is coherent, but it leaves a product gap against microVM-based tools such as Docker Sandboxes. Operators who want stronger local isolation cannot choose a hypervisor-backed runtime, and operators who are happy with Docker cannot explicitly select and manage the current mode as a first-class feature.

The requested feature is a single product-level capability:

  • the operator should be able to choose how an agent is loaded into a workspace
  • the two supported approaches should be dind and microvm
  • the same agent/workspace concepts should continue to work in both modes
  • dind is the shortest path and already matches the current architecture
  • microvm is the path that narrows the gap with Docker Sandboxes
  • a user-visible mode switch makes the isolation tradeoff explicit instead of implicit
  • the project can keep its current Docker-first ergonomics while adding a stronger boundary where the host supports it

Today the runtime is tightly coupled to Docker and DinD.

Important current assumptions:

  • the agent can talk to a Docker-compatible daemon from inside its sandbox
  • workspace access is delivered through direct host bind mounts
  • agent state persistence is separate from runtime filesystem persistence
  • runtime attach/eject/list behavior is discovered from Docker container state

These assumptions are reasonable for dind, but they are not backend-neutral.

Add a first-class sandbox mode abstraction with these operator-visible outcomes:

  1. The operator can choose dind, microvm, or auto.
  2. Existing role repos remain usable without forcing every agent author to redesign their Dockerfile.
  3. Workspaces, mounts, last-used agent tracking, and persisted Claude/GitHub state continue to work in both modes.
  4. Unsupported hosts fail clearly or fall back intentionally rather than half-working.

Explicitly out of scope:

  • Replacing Docker-based image builds in the first phase
  • Designing a cloud sandbox product
  • Guaranteeing identical low-level runtime behavior across all providers
  • Claiming that a hardened container runtime is equivalent to a microVM

The feature should be visible in three places: the CLI, the configuration, and the launch summary.

CLI examples:

jackin load agent-smith --sandbox-mode dind
jackin load agent-smith --sandbox-mode microvm
jackin load agent-smith --sandbox-mode auto

Suggested global shape:

[runtime]
default_mode = "auto"
microvm_provider = "auto"
persist_engine_state = false

Suggested workspace override shape:

[workspaces.big-monorepo.runtime]
mode = "microvm"
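
A minimal sketch of how these shapes could map onto config types on the jackin side; serde is assumed, and the struct and field names are illustrative rather than an existing schema:

// Illustrative only: mirrors the [runtime] table and the per-workspace override above.
#[derive(serde::Deserialize)]
struct RuntimeConfig {
    #[serde(default)]
    default_mode: Option<String>,     // "dind" | "microvm" | "auto"
    #[serde(default)]
    microvm_provider: Option<String>, // "auto" | "orbstack" | "smolvm"
    #[serde(default)]
    persist_engine_state: bool,
}

#[derive(serde::Deserialize)]
struct WorkspaceRuntimeOverride {
    #[serde(default)]
    mode: Option<String>,             // e.g. "microvm" for big-monorepo
}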

The launch summary should tell the operator which backend is being used, for example:

  • sandbox mode: dind
  • sandbox mode: microvm (kata)
  • sandbox mode: microvm (apple)

Option 1: DinD Only, Hardened and Explicit


This makes the current architecture first-class without adding a second backend yet.

Pros:

  • smallest implementation
  • low migration risk
  • immediately improves current security posture if TLS/rootless work is added

Cons:

  • does not solve the Docker Sandboxes comparison gap
  • still shares the host kernel

Option 2: Two User Modes, One Generic MicroVM Abstraction


Expose dind and microvm at the product level. Under microvm, pick an implementation per platform.

Recommended provider strategy:

  • macOS with OrbStack installed: OrbStack isolated machine (v2.1.1+)
  • macOS without OrbStack, or Linux with KVM: libkrun via the smolvm wrapper
  • unsupported host: explicit fallback to dind or hard failure

Pros:

  • matches the user-facing requirement directly
  • keeps the UX stable while allowing different providers underneath
  • avoids leaking a specific provider name into the top-level product contract

Cons:

  • larger design surface
  • requires backend-neutral lifecycle management

Option 3: Expose Provider-Specific Modes Directly

Expose dind, kata, apple, and later other providers as top-level modes.

Pros:

  • transparent about implementation

Cons:

  • leaks infrastructure choices into the operator UX
  • makes cross-platform defaults and docs more complex
  • encourages provider-specific branching too early

Choose Option 2.

Make microvm the user-facing mode and treat the actual provider as an internal decision. This keeps the feature aligned with the isolation model the operator cares about while preserving room for different host-specific implementations.

Critical Compatibility Question: Will Existing Agent Dockerfiles Work?


Mostly yes for the agent images themselves.

The current role repo contract only requires that the final stage use the construct image. The existing agent Dockerfiles are standard OCI-style environment definitions and should remain valid inputs for both dind and microvm modes.

The real compatibility issue is not the Dockerfiles. It is the runtime contract around them:

  • current agents expect Docker CLI tooling in the sandbox
  • current launch flow injects DOCKER_HOST
  • current runtime assumes direct bind mounts and docker attach

So the correct conclusion is:

  • agent Dockerfiles are reusable
  • the runtime backend is what must change
  • the product should preserve a Docker-compatible inner engine where possible so current agents continue to function

dind mode should formalize and harden the current design.

Required improvements:

  • move the current runtime into a named backend module
  • stop using unauthenticated plain TCP DinD
  • prefer TLS or a private socket transport
  • add a backend-neutral instance registry instead of inferring all state from Docker names
  • only persist last-used agent metadata on successful launch

Possible hardening layers to evaluate:

  • docker:dind-rootless
  • sysbox-runc on Linux hosts

sysbox is especially relevant as a Linux-only improvement path because it can support Docker-in-Docker without the usual privileged container model. It is not a microVM and should not be presented as one.

microvm mode should provide a stronger isolation boundary while keeping the same high-level operator workflow.

Required properties:

  • private engine inside the VM boundary
  • reusable agent image or equivalent runnable artifact
  • workspace access delivered into the VM
  • persisted Claude/GitHub/plugin state mounted or synchronized into the VM

There are two realistic local providers: an OrbStack isolated machine on macOS, and libkrun (via the smolvm wrapper) on Linux and macOS. Both are detailed in the April 2026 research sections below.

The most important implementation question is not “can jackin use a VM?” It is:

  • what should run inside the VM
  • what should stay on the host
  • what parts of the current Docker contract need to survive inside the sandbox

For this project, the most practical mental model is:

  • keep using Dockerfiles and OCI images as the packaging format
  • change the isolation boundary from host containers to a VM
  • provide a private Docker-compatible engine inside that VM when the agent needs Docker workflows

The build path and the runtime path are separate decisions.

Build path | Run path | Viable for jackin? | Notes
Host Docker | Host container | Yes | Current implementation
Host Docker | MicroVM | Yes | Best first prototype for microvm
VM-local Docker | MicroVM | Yes | Closer to Docker Sandboxes
Host Docker | Remote Linux microVM | Yes | Useful if the local host cannot provide a microVM backend

The strongest short-term recommendation is:

  • keep host-side image builds in the first phase
  • run the resulting agent image inside a microVM
  • provide the private engine inside the VM boundary, not on the host

That gives a meaningful security improvement without redesigning the whole build pipeline on day one.

The cleanest microVM model for jackin is not “replace Docker with a VM”. It is:

  • run the agent inside the VM
  • run a private Docker-compatible engine inside the same VM
  • expose the workspace into the VM
  • mount or synchronize the persisted Claude/GitHub/plugin state into the VM

That means the VM should contain at least:

  • the agent runtime environment
  • Claude entrypoint support
  • a Docker-compatible daemon such as dockerd or possibly containerd + compatibility tooling
  • guest-local writable storage for engine state

This is the closest match to Docker Sandboxes’ architecture while still reusing jackin’s current agent image model.

Why Existing Agent Dockerfiles Still Matter


Agent Dockerfiles are still useful because they define the userland environment:

  • language runtimes
  • development tools
  • shell environment
  • plugins and conventions

So the likely design is not “replace Dockerfiles with VM images.” It is:

  • Dockerfile builds agent filesystem/tooling layer
  • microVM provider decides how to execute that layer safely

In other words, the Dockerfile remains the environment definition, while the microVM becomes the runtime boundary.

Earlier iterations of this research investigated Kata Containers as a Linux provider and Apple Containerization as a macOS provider. Both are still real options in the abstract, but the April 2026 updates below replace them with two concrete, actively maintained paths:

  • OrbStack with isolated machines for macOS — production-ready, shell-out to orb, scoped filesystem via v2.1.1+
  • libkrun (via smolvm or direct embedding) for cross-platform — open source, Rust-native, same ecosystem as Podman/crun

The Kata and Apple Containerization notes are not repeated here because:

  • Kata integrates naturally through containerd rather than Docker. It remains a credible Linux path if the libkrun direction does not work out, but requires jackin to adopt a containerd-oriented inner engine — a bigger architectural bet than the libkrun path, which keeps OCI images as the packaging format end to end. Kata’s Docker-in-guest storage caveat (virtio-fs not usable as the OverlayFS upper layer) is also a real constraint.
  • Apple Containerization (the Apple container CLI / containerization framework) is a credible macOS path, but since OrbStack’s v2.1.x --isolated mode now offers the sandboxing semantics the initial research was worried about, adding Apple Containerization as a third macOS option does not pay for its maintenance cost.

If either provider turns out to be necessary later, the backend abstraction below is deliberately shaped so that adding one more provider under the microvm umbrella is a mechanical change, not a design change.

It is possible to imagine jackin owning more of the VM runtime stack directly using technologies like raw KVM, Firecracker, Cloud Hypervisor, or Apple Virtualization.framework.

This is not the recommended first implementation path.

Why to defer it:

  • it pushes jackin toward becoming a sandbox runtime product rather than an operator CLI
  • it would require jackin to own more low-level VM orchestration concerns directly
  • libkrun and OrbStack already solve a meaningful portion of that work in their respective environments; embedding libkrun directly (Option B in the April 2026 research below) is a smaller commitment than building from raw hypervisor APIs

This should remain future research unless both the OrbStack provider and the libkrun-wrapper provider prove too limiting.

Docker Sandboxes combines three important ideas:

  • microVM isolation boundary
  • private Docker daemon inside the sandbox
  • host-side orchestration and policy layer around the sandbox

The closest jackin implementation would therefore be:

  • keep current agent image build flow at first
  • run the agent inside a microVM
  • provide the Docker-compatible engine inside that VM, not as a host-side sidecar
  • track sandbox instances independently of host Docker container naming

That still differs from Docker Sandboxes in one important way during the first phase:

  • initial jackin microvm mode will likely still build images on the host, while Docker Sandboxes keeps the main sandbox execution model fully inside the VM boundary

That is acceptable as an incremental architecture, but it should be described honestly.

The microvm user-facing mode routes to one of two providers depending on host. See the April 2026 research sections below for the full design of each.

Topic | Linux / cross-platform microvm | macOS microvm
Recommended provider | libkrun via smolvm wrapper | OrbStack isolated machine (v2.1.1+)
Best first integration style | shell out to smolvm CLI | shell out to orb CLI
Longer-term integration style | embed libkrun directly via C ABI (or consume a future smolvm Rust crate) | continue shell-out; optional Swift helper only if OrbStack limits become constraints
Agent image source | host-built OCI image | host-built OCI image
Inner engine requirement | none (agent is the workload); add dockerd only if agent runs sibling containers | VM-local dockerd (optional, per agent needs)
Workspace model | explicit volume declared on VM launch | selective --share on orb create --isolated
Host filesystem default | invisible unless declared | invisible in isolated mode; full /Users in default mode (not used)
Main blocker | libkrunfw distribution; image→rootfs pipeline mechanics | no built-in host-side network proxy or credential injection
Best first milestone | experimental microvm via smolvm on Linux + macOS | experimental microvm via orb --isolated on macOS
License | Apache-2.0 (libkrun, smolvm) | proprietary (OrbStack)

The current load_agent flow needs a backend seam.

Suggested responsibilities:

  • repo resolution and validation
  • image build
  • persisted agent-state preparation
  • backend launch/attach/list/eject

Suggested internal shape:

  • src/backend/mod.rs
  • src/backend/dind.rs
  • src/backend/microvm.rs
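
A hedged sketch of what that seam could look like; the trait and type names here are illustrative, not an existing jackin API:

// Hypothetical seam; names are illustrative, not an existing jackin API.
use std::error::Error;
use std::path::PathBuf;

pub struct LaunchSpec {
    pub agent_image: String,  // host-built OCI image tag
    pub workspace: PathBuf,
    pub state_dir: PathBuf,   // persisted Claude/GitHub/plugin state
}

pub struct InstanceHandle {
    pub id: String,               // stable instance ID from the registry
    pub provider: Option<String>, // e.g. "orbstack" or "smolvm"; None for dind
    pub backend_ref: String,      // container name, machine name, or VM handle
}

pub trait SandboxBackend {
    fn launch(&self, spec: &LaunchSpec) -> Result<InstanceHandle, Box<dyn Error>>;
    fn attach(&self, handle: &InstanceHandle) -> Result<(), Box<dyn Error>>;
    fn list(&self) -> Result<Vec<InstanceHandle>, Box<dyn Error>>;
    fn eject(&self, handle: &InstanceHandle) -> Result<(), Box<dyn Error>>;
}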

The project should stop treating Docker names as the source of truth.

Persist per-instance metadata such as:

  • stable instance ID
  • agent selector
  • backend kind
  • provider kind
  • workspace label
  • display name
  • backend-specific handle

This is required for mixed backends because VM instances will not naturally map to current Docker naming conventions.
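
One possible shape for a persisted record, assuming serde for the on-disk format; the field names mirror the list above and are not a finalized schema:

// Field names mirror the list above; the exact schema and storage format are open.
#[derive(serde::Serialize, serde::Deserialize)]
struct InstanceRecord {
    id: String,               // stable instance ID, generated at launch
    agent: String,            // agent selector
    backend: String,          // "dind" | "microvm"
    provider: Option<String>, // "orbstack" | "smolvm" when backend is microvm
    workspace: String,        // workspace label
    display_name: String,
    handle: String,           // backend-specific handle (container or machine name)
}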

Today workspaces assume direct bind mounts. A VM backend may need a different transport.

The design should introduce a backend-neutral concept such as:

  • direct bind mount
  • shared filesystem passthrough
  • synchronized directory

This does not require changing the user-facing workspace model immediately, but it does require changing the internal representation.
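
For illustration, the internal representation could be as small as an enum whose variants mirror those three transports; the names are assumptions:

// Variants mirror the three transports above; structure is illustrative only.
use std::path::PathBuf;

enum WorkspaceTransport {
    // dind today: direct host bind mount into the agent container
    BindMount { host_path: PathBuf },
    // VM providers: virtio-fs / selective-share style passthrough
    SharedPassthrough { host_path: PathBuf },
    // fallback for providers without passthrough: explicit copy or sync
    SyncedDirectory { host_path: PathBuf, guest_path: PathBuf },
}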

The product should define what an agent runtime is expected to provide.

At minimum:

  • shell execution
  • Claude entrypoint support
  • Git identity configuration
  • plugin bootstrap
  • Docker-compatible engine access inside the sandbox, if the backend promises Docker workflows

This prevents future backends from being “supported” in name but missing core behavior.

Host support for dind mode:

  • macOS: supported now through existing Docker environments
  • Linux: supported now
  • Windows/WSL2: possible but still secondary

Host support for microvm mode:

  • macOS (Apple Silicon, macOS 14+): primary target is the OrbStack isolated machine; libkrun via smolvm is the open-source alternative
  • macOS (Intel, macOS 11+): libkrun via smolvm only (OrbStack does run on Intel Macs, but Apple Silicon is the primary target here)
  • Linux with KVM: libkrun via smolvm
  • Linux without KVM: fall back to dind with a clear message
  • Windows: fall back to dind; neither upstream libkrun nor OrbStack targets Windows

Scenario 1: Current Behavior, Explicitly Named


The operator uses:

jackin load agent-smith --sandbox-mode dind

Outcome:

  • current behavior preserved
  • runtime is clearly labeled as container-based

Scenario 2: Stronger Isolation on a Mac with OrbStack

The operator uses:

jackin load the-architect --sandbox-mode microvm

On a Mac with OrbStack installed, jackin selects OrbStack and creates an --isolated machine with selective shares for the workspace and the jackin state directory.

Outcome:

  • agent runs inside a VM with a scoped filesystem
  • runtime output: backend: microvm (orbstack)

Scenario 3: Stronger Isolation on Linux or Mac without OrbStack


The operator uses:

jackin load the-architect --sandbox-mode microvm

On a Linux host with KVM, or on a Mac without OrbStack, jackin selects smolvm and boots the agent image directly under libkrun.

Outcome:

  • agent runs inside a libkrun microVM
  • runtime output: backend: microvm (smolvm)

Scenario 4: Automatic Mode Selection

The operator uses:

jackin load agent-smith --sandbox-mode auto

Outcome:

  • macOS with OrbStack: use microvm (orbstack)
  • macOS without OrbStack, Linux with KVM: use microvm (smolvm)
  • otherwise: fall back to dind with a clear message
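
A sketch of those auto rules, using simple host probes (binary-on-PATH and /dev/kvm checks) as stand-ins for real provider detection:

// Sketch of the auto rules above. The PATH and /dev/kvm probes are simplifications;
// a real implementation would also check versions and isolated-machine support.
enum Provider { Orbstack, Smolvm }
enum Backend { Dind, Microvm(Provider) }

fn cli_on_path(bin: &str) -> bool {
    std::env::var_os("PATH")
        .map(|paths| std::env::split_paths(&paths).any(|dir| dir.join(bin).is_file()))
        .unwrap_or(false)
}

fn pick_backend() -> Backend {
    let macos = cfg!(target_os = "macos");
    let kvm = std::path::Path::new("/dev/kvm").exists();
    if macos && cli_on_path("orb") {
        Backend::Microvm(Provider::Orbstack)
    } else if (macos || kvm) && cli_on_path("smolvm") {
        Backend::Microvm(Provider::Smolvm)
    } else {
        eprintln!("no supported microVM provider on this host; falling back to dind");
        Backend::Dind
    }
}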

Phase 1: Backend Seam and DinD Hardening

  • extract a backend interface from the current runtime flow
  • add backend-neutral instance persistence
  • fix current launch metadata persistence bugs
  • harden DinD transport and cleanup behavior

Phase 2: Sandbox Mode Selection

  • introduce CLI/config support for --sandbox-mode
  • map dind to current behavior
  • keep microvm hidden or experimental until at least one provider works end to end
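
Assuming the CLI uses clap's derive API, the flag surface could look roughly like this; the enum and field names are illustrative:

// Hedged sketch assuming clap's derive API; names are illustrative.
#[derive(clap::ValueEnum, Clone, Debug)]
enum SandboxMode { Dind, Microvm, Auto }

#[derive(clap::Args, Debug)]
struct LoadArgs {
    /// Agent selector, e.g. agent-smith
    agent: String,
    /// dind | microvm | auto; falls back to the config default when omitted
    #[arg(long = "sandbox-mode", value_enum)]
    sandbox_mode: Option<SandboxMode>,
}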

Phase 3: First MicroVM Provider — smolvm via libkrun

  • add src/backend/microvm_smolvm.rs that shells out to smolvm
  • package agent OCI image for smolvm consumption (local registry or docker-archive)
  • validate workspace semantics, attach/reconnect, and cold-start latency
  • document host requirements (KVM on Linux, macOS 14+ for HVF)

Phase 4: Second MicroVM Provider — OrbStack Isolated Machine

  • add src/backend/microvm_orbstack.rs that drives orb create --isolated with selective shares
  • route macOS hosts with OrbStack to this provider by default under auto
  • surface provider name in runtime output

Phase 5: Documentation and Positioning

  • update docs to describe dind vs microvm (orbstack|smolvm)
  • keep the security model blunt and accurate: neither microVM provider today matches Docker Sandboxes’ network-proxy / credential-injection layers
  • explain host support and fallback behavior clearly

Pitfalls to avoid:

  • selecting OrbStack’s default machine mode by accident and silently losing the scoped-filesystem property — the provider must always use --isolated
  • assuming smolvm’s shell-out surface is stable across its pre-1.0 releases without pinning
  • tying instance identity to Docker names when multiple backends exist
  • supporting a backend that cannot actually satisfy Docker-based agent workflows (libkrun+smolvm alone does not run sibling Docker containers without extra work)
  • claiming feature parity with Docker Sandboxes when the network-policy and credential-injection layers are not yet built

Alternatives Worth Mentioning in the Future Design


Sysbox (sysbox-runc): a good Linux-only hardening path for dind mode.

  • better than privileged DinD
  • not a microVM
  • useful if the product wants a stronger container boundary without VM orchestration

Hardened container runtimes: a good defense-in-depth option, but not the best primary answer for this feature.

  • stronger than plain containers
  • weaker than microVMs
  • not the clearest fit when nested Docker workflows are a core requirement

Other libkrun-ecosystem wrappers:

  • crun --krun: OCI runtime substitute for Podman. Relevant if jackin ever adopts a containerd/Podman-oriented flow; not relevant today.
  • krunvm: docker run-style CLI. Duplicates smolvm’s scope with less packaging polish.
  • krunkit: GPU-forward libkrun VMs on macOS. Out of scope unless agents need GPU acceleration.
  • muvm: Minimal launcher optimized for desktop apps in a VM. Scope mismatch for agent sandboxes.

Kata Containers: retained as a Linux contingency if the libkrun path hits an unexpected ceiling. Not pursued in the April 2026 plan because containerd-oriented integration is a bigger jackin-side change than the libkrun route.

Apple Containerization: retained as a macOS contingency if OrbStack becomes unsuitable. Not pursued in the April 2026 plan because OrbStack v2.1.x now covers the sandboxing case and adding a third macOS provider does not pay for its maintenance cost.

The next design pass should turn this TODO into a full implementation design with:

  • exact CLI and config schema changes
  • exact backend trait/module structure
  • exact instance registry format
  • workspace transport model for VM providers
  • persistence policy for user state vs engine state
  • provider selection rules for Linux/macOS
  • rollout plan for experimental vs stable support

OrbStack Linux VMs as a macOS microvm Provider (April 2026 Research)

This section documents research into using OrbStack Linux VMs as a concrete, macOS-focused microvm backend. It has been updated to reflect OrbStack 2.1.0 (April 2026), which shipped isolated machines without filesystem integration, and OrbStack 2.1.1, which added selective file-sharing mounts in isolated machines. Those two features together close the most significant gap flagged in the initial write-up.

OrbStack collapses the macOS microVM story into a single, production-ready tool that already solves most of the hard problems: sub-2-second VM boot, scoped file sharing (as of v2.1.1), networking, and Docker-inside-VM.

The trade-off is that OrbStack is macOS-only, which means it cannot be the universal answer. But since the majority of jackin users are Mac developers, and the security comparison gap with Docker Sandboxes is most relevant on developer laptops, OrbStack is a pragmatic first target — and since v2.1.0 also a defensible one on security grounds.

OrbStack provides lightweight Linux VMs on macOS via the orb CLI.

Key properties relevant to jackin:

  • VM lifecycle: orb create, orb start, orb stop, orb delete — fully scriptable, no GUI needed
  • Supported distros: 15 distros including Debian (trixie confirmed working), Ubuntu, Alpine, Fedora, Arch, etc.
  • Boot time: ~2 seconds — comparable to container startup
  • Isolated machines (v2.1.0+): opt-in mode where a machine has no automatic filesystem integration with macOS; /Users is not auto-mounted and the host filesystem is invisible by default
  • Selective file sharing (v2.1.1+): inside an isolated machine, the operator can declare specific host paths to expose — the Docker Sandboxes scoped-mount model, but per-machine
  • Non-isolated (default) file sharing: host /Users/... auto-mounted at the same path inside the VM via VirtioFS
  • Networking: VM services auto-accessible at localhost; DNS at {name}.orb.local; full internet access (no built-in host-side proxy)
  • Docker inside VM: Standard apt install docker.io gives a full Docker daemon — no sidecar needed
  • Provisioning: Cloud-init support (orb create debian:trixie my-vm -c cloud-init.yml)
  • Resource overhead: Dynamic memory allocation — idle VMs use near-zero resources (v1.7.0+)
  • Command execution: orb -m {name} <command> runs commands inside the VM; SSH also available
  • Architecture: Apple Silicon primary; Intel Mac supported; Rosetta for x86_64 emulation
  • Additional security features: device-mapper encryption (LUKS, dm-verity, dm-integrity) support for guest disks (v2.0.2+), container/machine data sandboxing by macOS (v1.9.1+), trusted .orb.local certificates for in-container HTTPS (v1.9.0+)

The current DinD flow:

Host Docker → build image → create network → start docker:dind sidecar → start agent container
→ agent talks to DinD via DOCKER_HOST=tcp://dind:2375
→ workspace via bind mounts

The proposed OrbStack VM flow:

Host Docker → build image → export image as tar
→ orb create debian:trixie jackin-{name} -c cloud-init.yml
→ inside VM: install Docker, load image, start agent container
→ agent talks to Docker via unix:///var/run/docker.sock (real daemon, not sidecar)
→ workspace via an explicitly declared VirtioFS share (isolated machine, v2.1.1+)

Key simplification: inside the VM, there is no need for a DinD sidecar at all. The VM’s own Docker daemon is the isolated engine. The agent can use DOCKER_HOST=unix:///var/run/docker.sock instead of a TCP connection to a sidecar.

The concrete shape below assumes isolated machines with selective shares (v2.1.1+). Any jackin flow that silently creates a default (non-isolated) machine has effectively downgraded the security story and should be treated as a bug.

orb create debian:trixie jackin-{name} \
--isolated \
--share "${WORKSPACE}" \
--share "${HOME}/.jackin/data/jackin-{name}" \
-c /tmp/jackin-cloud-init.yml

The exact flag spelling for selective shares depends on what OrbStack 2.1.1 settles on; --share is used here as a placeholder for the v2.1.1+ mechanism. The important invariant is: only the workspace and the jackin data directory for this instance are exposed, nothing else under /Users.
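
In provider code, that launch step would likely be a plain shell-out mirroring the command above; this is a sketch only, and --share remains the placeholder flag discussed in the previous paragraph:

// Sketch only: shells out to the orb CLI exactly as in the command above.
// The --share flag spelling is the same placeholder discussed in the previous paragraph.
use std::path::Path;
use std::process::Command;

fn create_isolated_machine(name: &str, workspace: &Path, state_dir: &Path) -> std::io::Result<()> {
    let status = Command::new("orb")
        .args(["create", "debian:trixie"])
        .arg(name)
        .arg("--isolated")
        .arg("--share").arg(workspace)
        .arg("--share").arg(state_dir)
        .args(["-c", "/tmp/jackin-cloud-init.yml"])
        .status()?;
    if !status.success() {
        return Err(std::io::Error::new(std::io::ErrorKind::Other, "orb create failed"));
    }
    Ok(())
}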

Cloud-init template:

#cloud-config
packages:
  - docker.io
runcmd:
  - systemctl enable docker
  - systemctl start docker
  - usermod -aG docker debian

Without /Users auto-mount, the image tar must travel through an explicit share or through orb push:

# Build on host (reuse current flow)
docker build -t jackin-{slug} ...
# Export to the explicitly-shared jackin data dir
docker save jackin-{slug} -o ~/.jackin/data/jackin-{name}/image.tar
# Inside VM, load from the shared path
orb -m jackin-{name} docker load -i /Users/{user}/.jackin/data/jackin-{name}/image.tar

Alternative, without shares: orb push jackin-{name} ~/.jackin/cache/jackin-{slug}.tar /tmp/image.tar — explicit transfer, slower but makes the image-flow surface auditable.

orb -m jackin-{name} docker run -it --name agent \
-e DOCKER_HOST=unix:///var/run/docker.sock \
-e GIT_AUTHOR_NAME="{git_user_name}" \
-e GIT_AUTHOR_EMAIL="{git_user_email}" \
-e JACKIN=1 \
-v /Users/{user}/Projects/myapp:/Users/{user}/Projects/myapp \
-v /Users/{user}/.jackin/data/jackin-{name}/.claude:/home/agent/.claude \
-v /Users/{user}/.jackin/data/jackin-{name}/.claude.json:/home/agent/.claude.json \
-v /Users/{user}/.jackin/data/jackin-{name}/.config/gh:/home/agent/.config/gh \
jackin-{slug}

These -v paths must match the shares declared at orb create --isolated time; anything else will simply not exist inside the VM.

# Reattach to running agent
orb -m jackin-{name} docker attach agent
# Stop VM (preserves state for fast restart)
orb stop jackin-{name}
# Or delete entirely
orb delete jackin-{name}

Suggested User-Facing Name and Provider Selection


An earlier iteration of this research proposed exposing orbstack-vm as a top-level backend name. The updated recommendation — aligned with the libkrun+smolvm research in the next section — is to keep microvm as the user-facing mode and treat the specific provider (OrbStack, smolvm) as an internal selection:

  • dind — current behavior, Docker-in-Docker sidecar (cross-platform)
  • microvm — provider chosen by jackin based on host: OrbStack isolated machine on Mac with OrbStack installed; libkrun via smolvm otherwise

Config shape:

[runtime]
default_sandbox_mode = "dind" # or "microvm" or "auto"
# optional provider override
microvm_provider = "auto" # or "orbstack" or "smolvm"

CLI shape:

jackin load agent-smith --sandbox-mode dind
jackin load agent-smith --sandbox-mode microvm
jackin load agent-smith --sandbox-mode microvm --microvm-provider orbstack
jackin load agent-smith --sandbox-mode auto

Runtime output:

  • backend: dind
  • backend: microvm (orbstack, debian trixie)
  • backend: microvm (smolvm)

This keeps the operator UX stable while letting jackin pick the right provider for the host. The provider name is visible in the runtime output so operators can confirm what they got.

Architecture Comparison: DinD vs OrbStack Isolated Machine

Aspect | DinD (current) | OrbStack isolated machine (proposed)
Isolation boundary | Container (shared host kernel, --privileged DinD) | Full VM (separate kernel, separate userland)
Docker access | TCP sidecar (tcp://dind:2375), no TLS | Native daemon (unix:///var/run/docker.sock)
Workspace delivery | Direct host bind mounts | Selective --share into isolated machine (v2.1.1+)
State persistence | Host dirs mounted into container | Explicitly-shared host dirs mounted into container inside VM
Container escape risk | Privileged sidecar = root-equivalent host access | Escape stays inside VM boundary
Startup time | Seconds (container pull + DinD readiness) | ~2s VM boot + ~30-60s first-time provisioning
Platform | macOS, Linux, WSL2 | macOS only
External dependency | Docker (already required) | Docker + OrbStack

Docker Sandboxes provides the most complete agent sandbox model available today. The sandbox runtime itself (sbx) does not require Docker Desktop, although custom template builds do. Understanding that split is essential for honest positioning of any jackin backend: the useful comparison is the sandbox architecture and operator surface, not an assumption that every Docker-owned capability maps to jackin.

Docker Sandboxes implements four distinct isolation layers:

  1. Hypervisor isolation: Each sandbox runs in a lightweight microVM with its own Linux kernel. Uses Apple Virtualization.framework on macOS, Hyper-V on Windows. Processes inside the VM are invisible to the host and other sandboxes.

  2. Filesystem isolation: Only the declared workspace directory is shared via filesystem passthrough. The workspace is mounted at the same absolute path inside the sandbox. Symlinks pointing outside the workspace scope are not followed. The rest of the host filesystem is completely invisible.

  3. Network isolation: All HTTP/HTTPS traffic routes through a host-side proxy. Raw TCP, UDP, and ICMP are blocked at the network layer. Traffic to private IPs, loopback, and link-local addresses is prohibited. Sandboxes cannot reach each other or the host’s localhost. Only domains explicitly listed in network policies are reachable.

  4. Credential isolation: The host-side proxy intercepts outbound API requests and injects authentication headers (API keys, tokens). Credential values never enter the VM. The proxy acts as a MITM for HTTPS, terminating TLS and re-encrypting with its own CA, allowing policy enforcement and credential injection.

Each sandbox also gets its own dedicated Docker Engine, completely isolated from the host Docker daemon. The agent cannot mount the host Docker socket.

# Basic usage
docker sandbox run claude .
# With extra read-only workspaces
docker sandbox run claude ~/project-a ~/shared-libs:ro ~/docs:ro
# Named sandbox
docker sandbox run --name my-project claude .
# Custom base image
docker sandbox run --template python:3-alpine claude .
# With agent arguments
docker sandbox run claude . -- -p "What version are you running?"
# Lifecycle
docker sandbox ls
docker sandbox rm my-project

Workspaces are mounted at the same absolute path as on the host. Additional workspaces can be appended as arguments with optional :ro suffix.

Side-by-Side: Docker Sandboxes vs OrbStack VM (Isolated Machine Mode)

Isolation layer | Docker Sandboxes | OrbStack isolated machine (v2.1.1+) | Gap
VM boundary | microVM per sandbox, own kernel | Full Linux VM per agent, own kernel | Equivalent
Filesystem scope | Only declared workspace shared; symlinks outside scope blocked | No auto-mount; selective per-machine shared folders (v2.1.1+) | Essentially equivalent
Network | HTTP/HTTPS only via host proxy; raw TCP/UDP/ICMP blocked; domain allowlist | Full unrestricted network access; iptables in-guest is operator-configurable | Significant gap
Credentials | Host proxy injects auth headers; keys never enter VM | Keys passed as env vars or mounted files; they enter the VM | Significant gap
Inner Docker | Separate dockerd per sandbox; no host socket access | Separate dockerd in VM; no host socket access | Equivalent
Platform | macOS + Windows | macOS only | Minor gap

Backend | Security model
dind (current) | Container isolation with privileged sidecar — weakest boundary
OrbStack default machine | VM kernel boundary, but full /Users auto-mount — strong kernel isolation, weak filesystem isolation
OrbStack isolated machine (v2.1.0+) with selective shares (v2.1.1+) | VM kernel boundary + scoped filesystem — comparable to Docker Sandboxes on boundary and filesystem, still missing network proxy and credential injection
Docker Sandboxes | VM + scoped filesystem + network proxy + credential isolation — strongest

The OrbStack VM backend provides a real and meaningful security improvement over DinD. The --privileged flag on the current DinD sidecar effectively gives the agent root-equivalent access to the host kernel. With OrbStack VMs, even a container escape stays inside the VM boundary.

As of v2.1.1 the filesystem-scope gap that previously distinguished OrbStack from Docker Sandboxes is substantially closed for any jackin flow that opts into --isolated and declares shares explicitly. The remaining gaps are network policy and credential injection, not VM isolation or filesystem scope.

Filesystem: Default Machines Still Auto-Mount /Users


The default (non-isolated) OrbStack machine mode still auto-mounts the macOS /Users directory via VirtioFS. For jackin’s use case this is not the right default: agents should not see every project on the host just because they happen to boot in a VM.

The practical resolution, as of April 2026, is to always create jackin’s machines with --isolated and then selectively share only the active workspace:

  • v2.1.0 (April 19, 2026): “Isolated machines without filesystem integration” — an isolated machine has no automatic /Users mount and the host filesystem is not visible
  • v2.1.1 (April 20, 2026): “Selective file sharing mounts in isolated machines” — inside an isolated machine, specific host paths can be declared as shares

Historical context (kept for audit trail — these are now resolved upstream by OrbStack v2.1.x, so the roadmap no longer links to the old tracking issues):

  • orbstack/orbstack#169 (2023) — the long-standing request that tracked this. The feature shipped as “isolated machines” + “selective shares” in v2.1.x.
  • orbstack/orbstack#1243 (2024) — closed as duplicate.
  • orbstack/orbstack#2308 (January 2026) — superseded by v2.1.x.

Implication for jackin: the orbstack provider should default to --isolated and reject any flow that would fall back to the default machine mode, unless the user opts out explicitly. The umount /Users workaround is no longer needed.

OrbStack VMs have full unrestricted network access by default, in both regular and isolated modes. A compromised agent could exfiltrate data to any endpoint. Partial mitigation is possible via iptables rules configured inside the VM through cloud-init, but this is not the same defense-in-depth as Docker Sandboxes’ host-side HTTPS proxy with domain allowlist.

This gap is unchanged by v2.1.x and is one of the two remaining structural differences with Docker Sandboxes.

Without a host-side proxy to inject credentials, API keys and tokens must be passed as environment variables or mounted files. A compromised agent has direct access to them.

Building a proxy similar to Docker Sandboxes’ approach would be a significant project. Short-term, credentials in the VM is the pragmatic path. This gap is also unchanged by v2.1.x.

OrbStack’s CPU and memory limits are global across all VMs, not per-machine. A runaway agent in one VM could affect others. This is acceptable for developer workstations but worth documenting.

OrbStack does not run on Linux. Linux users need a different microvm provider — the libkrun + smolvm track documented in the next section.

After v2.1.x, the residual gaps between the OrbStack isolated-machine backend and Docker Sandboxes are fewer but still real:

  1. Filesystem scope (was critical, now substantially closed): Addressed by --isolated + selective shares. Any open issue here is ergonomic (how jackin declares shares at machine-create time) rather than structural.

  2. Network policy (significant): jackin could configure iptables inside the VM via cloud-init to restrict outbound traffic. A basic allowlist of required domains (GitHub, Claude API, npm/PyPI registries) would cover the most important cases. Still weaker than a host-side HTTPS proxy because enforcement lives inside the guest.

  3. Credential injection (significant): A lightweight host-side proxy that intercepts outbound HTTPS from the VM and injects auth headers is a longer-term project. It would require intercepting traffic at the OrbStack network layer or configuring the VM to route through a local proxy.

  4. Per-VM resource limits: Not solvable at the jackin level while OrbStack only supports global limits.

Open design questions for the OrbStack provider:

  1. VM lifecycle strategy: Create/delete per session, or keep VMs alive and stop/start? Keeping alive is faster but consumes disk. Recommendation: stop/start by default, delete on explicit jackin eject --purge.

  2. Image transfer optimization: docker save + load through shared filesystem is simple but slow for large images. Could explore running a local registry that the VM pulls from, or using OrbStack’s Docker integration for image sharing.

  3. Attach UX: Currently docker attach gives the terminal directly. With OrbStack VM, it becomes orb -m jackin-{name} docker attach agent — an extra layer. Need to verify that Ctrl+P,Q detach works through the orb wrapper.

  4. State persistence: Should ~/.jackin/data/{name} be mounted into the VM’s agent container via VirtioFS (same path), or should state live inside the VM? VirtioFS mount is simpler and consistent with current behavior.

  5. Multiple agents: One VM per agent (cleaner isolation, higher resource use) or one shared VM (more efficient, weaker isolation between agents)?

  6. First-launch latency: First VM creation downloads a distro image (~1 min) and cloud-init installs Docker (~30-60s). Mitigations: pre-warm a jackin VM template, or keep VMs alive across sessions.

Topic | dind | microvm via OrbStack (isolated, v2.1.1+)
Platform | macOS, Linux, WSL2 | macOS only
External dependency | Docker | Docker + OrbStack 2.1.1+
Isolation boundary | Container (privileged) | VM (Apple VZ), own kernel
Inner Docker | TCP sidecar | Native daemon in VM
Workspace model | Host bind mount | Selective shares into isolated machine
Host filesystem exposure | Full access | Only explicitly declared shares
Network policy | None | In-guest iptables (operator-configurable)
Provisioning | None needed | Cloud-init
VM distro | N/A | Debian trixie (recommended)
First integration style | Current code, refactored | Shell out to orb CLI
Main residual blocker | None (already working) | No host-side network proxy or credential injection
Best first milestone | Refactored into backend trait | Experimental OrbStack provider under microvm mode

libkrun Foundation and smolvm Wrapper (April 2026)


This section supersedes earlier research into Kata Containers, Apple Containerization, and auser/mvm. It narrows the microvm backend story to two concrete, currently-maintained options: OrbStack with isolated machines (covered in the section above) and a libkrun-based wrapper (covered here). Both target the same product outcome; they differ on platform reach, openness, and engineering commitment.

libkrun is a dynamic library that adds a minimal Virtual Machine Monitor to a host process and runs workloads inside it. It is not itself a CLI or a product — it is a library (libkrun.so / libkrun.dylib) that other tools embed to get microVM capabilities without reimplementing the hypervisor integration.

For jackin this is the right level of abstraction to care about because:

  1. It is the upstream that a growing cluster of tools converge on. crun --krun, krunvm, krunkit, muvm, and smolvm all wrap libkrun. Choosing a libkrun-based wrapper today does not lock jackin into one small project; it plugs into a whole ecosystem that has already done the hypervisor portability work.

  2. It already covers the platforms jackin cares about. KVM on Linux and HVF on macOS (ARM64, macOS 14+) are in upstream. Windows is not — libkrun-based wrappers inherit that.

  3. It is open source (Apache-2.0). Maintained under the containers GitHub organization (Podman / crun / Buildah ecosystem), with active releases — libkrun-1.17.4 shipped February 18, 2026.

  4. It is Rust. 91% Rust, with a stable C ABI for language interop. Embedding it from jackin would be a supported path, not a workaround.

libkrun integrates code from Firecracker, rust-vmm, and Cloud Hypervisor into a single linkable library that exposes a small C API and several virtio devices:

  • Guest init: boots a custom guest via a statically-linked C init binary (not a full systemd distro)
  • Storage: virtio-fs for host directory passthrough; virtio-block for block-device rootfs
  • Networking: two mutually exclusive modes —
    • virtio-vsock + TSI (Transparent Socket Impersonation): guest TCP/UDP syscalls are transparently redirected through vsock to a host-side proxy — no guest network interface, no NAT, no bridges; requires libkrun’s kernel (libkrunfw)
    • virtio-net + passt / gvproxy: conventional virtual NIC with a userspace network proxy on the host
  • Guest communication: virtio-vsock end-to-end
  • Security variants: generic, AMD SEV/SEV-ES/SEV-SNP (libkrun-sev), Intel TDX (libkrun-tdx), macOS EFI (libkrun-efi)
  • Optional devices: virtio-gpu with Venus/native-context acceleration, virtio-balloon with free-page reporting, virtio-rng, virtio-snd, virtio-console

The novel piece for sandbox use cases is TSI. It reframes guest networking as host-mediated syscall forwarding, which means the host can apply per-guest network policy (allowlists, port whitelists, TLS termination) without guest cooperation and without exposing a routable network to the guest at all. For agent sandboxes this is the closest open-source analog to Docker Sandboxes’ host-side HTTPS proxy model.

libkrun does not by itself know how to:

  • turn an OCI image into a bootable guest filesystem
  • manage lifecycle (start/stop/restart/list)
  • deliver workspaces, secrets, or state into the guest
  • provide an interactive PTY from the host side

Every real user consumes libkrun through a wrapper that fills those gaps. The wrappers divide roughly by purpose:

Wrapper | Purpose | Scope
crun --krun | OCI runtime substitute for Podman/Buildah | Drop-in container-engine backend
krunvm | docker run-style CLI for ad-hoc VMs | Developer-facing ad-hoc VMs
krunkit | GPU-forward libkrun VMs on macOS | Graphics/AI workloads
muvm | Minimal userspace microVM launcher | Desktop Linux apps inside a VM
smolvm | Portable, OCI-consuming VM runtime with packaged .smolmachine files | General-purpose portable VMs

For jackin the candidate wrappers reduce to smolvm or a jackin-specific wrapper built directly on the libkrun C API. crun --krun is tied to Podman’s container workflow, which jackin does not use. krunvm and muvm are closer to developer conveniences than runtimes jackin would embed. krunkit solves a different problem.

smolvm from the smol-machines organization is the closest off-the-shelf analog to what jackin’s microvm backend would need to do. It sits on top of libkrun (plus libkrun’s custom kernel libkrunfw), adds an OCI image pipeline, and wraps the whole thing in a CLI.

Relevant properties:

  • Host platforms: macOS 11+ on Apple Silicon and Intel; Linux x86_64 and aarch64 with KVM
  • Guest: Linux matching the host arch, booted from OCI images pulled from Docker Hub / GHCR
  • Boot time: under 200 ms cold start
  • Packaging: .smolmachine file format — a single-file, stateful VM artifact that can be moved between hosts
  • Volumes: directory mounts only (no single-file mounts); host paths declared per-invocation
  • Networking: opt-in via --net, TCP/UDP only (no ICMP), host egress allowlist supported, SSH agent forwarding from host
  • SSH keys: private keys stay on the host; smolvm forwards the agent socket rather than copying keys in
  • License: Apache 2.0
  • Language: Rust (83%), shell (15%), TypeScript (2%)
  • Maintenance: latest release v0.5.19 on April 18, 2026; 41 releases; 515 commits on main; 2.3k stars

smolvm is explicitly positioned as a runtime, not a platform — “one CLI, OCI in, VM out”. That scope alignment matters: jackin can shell out to smolvm for the microVM plumbing and keep all the agent/workspace/plugin concepts in jackin’s own layer.

jackin treats smolvm as an external CLI, similar to how the current implementation shells out to docker.

// Pseudocode sketch: agent_image, ws, and instance_name come from the surrounding
// launch flow, and the flag names should be pinned against the smolvm release jackin targets.
use std::process::Command;

let child = Command::new("smolvm")
    .arg("run")
    .arg("--image").arg(&agent_image)
    .arg("--volume").arg(format!("{ws}:{ws}"))   // workspace at the same absolute path
    .arg("--net").arg("allowlist")
    .arg("--allow").arg("api.anthropic.com")
    .arg("--allow").arg("github.com")
    .arg(&instance_name)
    .spawn()?;

Pros:

  • lowest implementation cost — reuses smolvm’s lifecycle, image handling, and OCI fetching
  • no FFI, no C-API bindings, no libkrun kernel management
  • tracks smolvm’s upstream improvements for free

Cons:

  • inherits smolvm’s CLI and image model even when jackin would prefer a different shape
  • lifecycle/attach semantics are constrained by what the CLI exposes
  • adds a non-Rust dependency surface (binary install) on the user

This is the right shape for the first prototype. It validates the libkrun approach end-to-end without committing jackin to hypervisor-adjacent code.

Option B: Embed libkrun directly via its C API


jackin links against libkrun and libkrunfw and drives the VM lifecycle itself.

Pros:

  • full control over guest boot, device configuration, TSI policy, and PTY wiring
  • can align the sandbox model exactly with jackin’s agent/workspace contract
  • removes the smolvm CLI dependency and ships as a single Rust binary

Cons:

  • substantial engineering: C-ABI bindings, kernel distribution (libkrunfw ships as a shared object too), root-filesystem assembly, guest init contract
  • jackin effectively becomes a libkrun wrapper, which broadens the maintenance footprint
  • no existing OCI-to-libkrun-rootfs pipeline in Rust — would need to build one or borrow from smolvm

This is the right shape for a long-term primary backend, but only if the first prototype proves the approach and the shell-out boundary becomes the bottleneck.

Option C: Track smolvm for an embeddable Rust SDK


smolvm is 83% Rust and organized as a Cargo workspace. It currently ships a CLI, not a stable library crate. If the smol-machines organization publishes an embeddable smolvm-core (similar to what auser/mvm attempted), jackin could depend on it directly — getting the convenience of a prebuilt OCI-to-microVM pipeline with the control of in-process Rust APIs.

This is not available today. It is worth monitoring as a middle path between Option A and Option B.

Wrapping It Up: Rootfs and Agent Image Flow


The current jackin agent image is an OCI image built with host Docker. smolvm’s OCI-consuming flow preserves that contract directly:

[Host] docker build → jackin-{slug} image (OCI)
[Host] docker push → local registry (or use docker-archive)
[Host] smolvm run jackin-{slug} --volume {ws}:{ws} --net allowlist → VM
[VM] guest init → jackin entrypoint → Claude

This removes the entire “inner Docker daemon” question that previous research variants (OrbStack VM, mvm) had to solve. If the agent itself does not need to run sibling Docker containers, libkrun-based wrappers do not need an inner dockerd at all — the agent is the single workload in its own VM. That is a meaningful simplification over the DinD and OrbStack-VM flows.

If an agent does need sibling Docker workflows (uncommon but real), the options are:

  1. run a nested dockerd inside the guest (the same trade-off DinD already has)
  2. expose a jackin-managed sidecar VM for the sibling workload
  3. decline Docker-in-microVM for those agents and recommend they run under the dind backend

For the first phase, option 3 is acceptable.

Side-by-Side: libkrun/smolvm vs OrbStack (with Isolated Machines) vs Docker Sandboxes

Aspect | libkrun + smolvm | OrbStack (isolated machine) | Docker Sandboxes
Platform | macOS 11+, Linux (KVM), aarch64/x86_64 | macOS only | macOS + Windows
Hypervisor | libkrun over KVM / HVF | Apple Virtualization.framework | Apple VZ / Hyper-V
Filesystem model | Explicit directory volumes only | Explicit shared folders (v2.1.1+) | Declared workspace(s) only
Network model | Opt-in; TSI + host allowlist | Standard VM networking | Host proxy with domain allowlist
Credential model | Env/volumes; SSH agent forwarded, keys stay on host | Env/volumes; keys enter VM | Proxy-injected headers; keys never enter VM
Inner Docker required? | No (agent is the workload) | No (per-machine dockerd if needed) | No (separate engine per sandbox)
Guest communication | vsock (via smolvm / libkrun) | orb CLI / SSH | Docker SDK / proxy
Image format | OCI directly | Standard distro + cloud-init | OCI directly
Dependency | smolvm CLI, libkrun, libkrunfw | OrbStack app (commercial) | Docker Desktop (commercial)
License | Apache 2.0 (all) | Proprietary | Proprietary
Maturity for jackin | Newer; requires integration work | Production-ready; shell-out is trivial | Mature; tightest model but not embeddable

Why the Landscape Actually Shows Two Viable Paths


The honest reading of the April 2026 landscape for jackin is that OrbStack and libkrun-based wrappers are not competing for the same slot — they are two different answers to two different questions:

  • If the goal is “give Mac users a stronger local sandbox tomorrow, with minimal jackin engineering”: OrbStack isolated machines. Production-ready, the security gap that previously ruled it out is closed (v2.1.0 / v2.1.1), integration is shell-out to orb, and users who already have OrbStack installed get it for free.
  • If the goal is “give every user on every platform a stronger sandbox that jackin owns end-to-end”: libkrun via smolvm, then eventually directly. Open source, Apache 2.0, cross-platform, same language as jackin, and aligned with the broader container/VM ecosystem (crun, Podman, Buildah) rather than a single commercial product.

Those two paths can coexist in the backend abstraction. The microvm user-facing mode can route to orbstack on macOS hosts that have it, and to smolvm on Linux and on macOS hosts that don’t. Users pick the mode; jackin picks the provider.

Backend | Isolation | Filesystem | Network | Credentials | Rating
dind | Container (privileged sidecar) | Full host via bind mounts | Unrestricted | In container | Basic
microvm via OrbStack isolated machine | VM (Apple VZ) | Only declared shared folders | Unrestricted by default | Env/volumes enter VM | Strong (macOS)
microvm via libkrun/smolvm | VM (KVM/HVF) | Only declared volumes | TSI + host allowlist | Env/volumes enter VM; SSH keys stay on host | Strong (cross-platform)
Docker Sandboxes | VM (Apple VZ / Hyper-V) | Only declared workspace | Host HTTPS proxy, domain allowlist | Host proxy injects, never in VM | Strongest

Both microvm provider paths close the two biggest DinD gaps (shared kernel, privileged sidecar). Neither matches Docker Sandboxes’ credential injection today. The libkrun path has a more plausible route to closing the network-policy gap because TSI is a host-side syscall mediator by design — the host could implement the proxy behavior itself, independent of guest cooperation. That is a future project, not a first-phase commitment.

Revised Implementation Phases (libkrun + smolvm track)

Phase 1: Backend Abstraction

  • Extract a backend trait from the current runtime flow
  • Move current code into src/backend/dind.rs
  • Introduce --backend CLI flag: dind, microvm, auto
  • Define a backend-neutral instance registry (not keyed on Docker container names)

Phase 2: First microvm Provider — smolvm Shell-Out

  • Detect smolvm availability (smolvm --version) and libkrun/libkrunfw host requirements
  • Implement a src/backend/microvm_smolvm.rs that shells out to smolvm run / stop / attach
  • Image flow: reuse current Docker build, either push to a local registry smolvm can pull from or export via docker-archive and feed to smolvm
  • Workspace: declared as an explicit volume at the same absolute path
  • Attach: validate smolvm’s attach semantics for an interactive Claude session
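
The availability probe named in the first bullet could be as small as this; it assumes the pinned smolvm release keeps a --version flag, per that bullet:

// Assumes the pinned smolvm release keeps a --version flag, per the first bullet above.
fn smolvm_available() -> bool {
    std::process::Command::new("smolvm")
        .arg("--version")
        .output()
        .map(|out| out.status.success())
        .unwrap_or(false)
}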

Phase 3: Second microvm Provider — OrbStack Isolated Machine

  • Implement src/backend/microvm_orbstack.rs that drives orb create --isolated and sets up selective shared folders
  • Reuse the same backend-neutral instance registry
  • Surface the backend choice in runtime output: backend: microvm (orbstack) / backend: microvm (smolvm)

Phase 4: auto Selection and Hardened Defaults

  • auto picks OrbStack when the host has it and the user is on macOS, otherwise smolvm, otherwise falls back to dind with an explicit message
  • Allow explicit provider override for users who want to force one or the other
  • smolvm provider: enable host egress allowlist by default with a sensible baseline (Anthropic API, GitHub, language package registries)
  • OrbStack provider: default to --isolated with explicit --share entries; never auto-mount /Users
  • Document that neither backend currently matches Docker Sandboxes’ credential proxy

Longer-term follow-ups:

  • Evaluate promoting the smolvm provider from shell-out to direct libkrun integration (Option B) if the CLI boundary becomes a product constraint
  • Track smol-machines for an embeddable Rust crate (Option C)
  • Investigate a host-side HTTPS proxy built on top of TSI to close the credential-injection gap

Open questions:

  1. Image transfer mechanics for smolvm. Is the cleanest path a local OCI registry (jackin runs one on the host for VMs to pull from) or docker-archive + side-channel import? A local registry is simpler for users but adds a background service.
  2. Interactive attach semantics. smolvm’s attach/exec behavior needs to be validated for Claude’s interactive session — specifically Ctrl+P/Ctrl+Q detach, window resize forwarding, and reconnect.
  3. macOS host matrix. smolvm requires macOS 11+; libkrun’s HVF backend requires macOS 14+. The effective minimum for jackin’s libkrun track is macOS 14+.
  4. libkrunfw distribution. The custom guest kernel ships as a shared library. Packaging this for end users (Homebrew tap? bundled with jackin?) is an open question for direct-integration paths.
  5. Outer-DinD agents. Some current agent images rely on nested Docker. Is the answer to route those to dind backend only, or to invest in nested dockerd inside the microVM? First-phase recommendation: route them to dind.