
Runtime Instance Model

The runtime instance model is the source of truth for how jackin names containers, stores per-instance metadata, decides whether a launch should restore or start fresh, and exposes running instances to hardline, eject, purge, and the console.

| Term | Meaning |
| --- | --- |
| Role | The reusable tool profile selected by a role key such as agent-smith or chainargos/backend-engineer. A role repo owns the Dockerfile, manifest, tools, runtime support, and plugins. |
| Instance | One jackin-managed runtime environment created from a role for a workspace or ad-hoc directory. In the Docker backend, the instance is represented by a role container, a DinD sidecar, a network, a cert volume, and jackin-managed state. |
| Agent runtime | One supported CLI runtime: claude, codex, amp, or opencode. |
| Agent session | One foreground agent process inside an instance. The initial session is created by load; additional sessions are created by hardline --new. |
| Durable agent home | The per-instance home/state tree mounted into the role container so agent history, runtime-local config, and auth handoff survive container recreation. |
| Recoverable state | Local jackin-managed state that can rebuild or reconnect an instance after Docker resources disappear. Container writable-layer changes are not recoverable. |
  • Every fresh jackin load claims a new DNS-safe instance base name. Running or stopped instances never block a fresh launch.
  • Missing recoverable state can block or prompt before a fresh launch because the operator may need to choose between restoring existing work and superseding it.
  • hardline owns reconnecting, restarting, rebuilding, inspecting, and starting additional sessions inside an existing instance.
  • load owns creating a fresh runtime instance.
  • The per-instance manifest is canonical for identity and lifecycle state; the global instance index is rebuildable lookup acceleration.
  • Restore is limited to host-backed state: workspace mounts, isolated worktrees/clones, the per-instance durable agent home, and jackin-managed metadata.
  • Docker containers, networks, cert volumes, DinD images, and role-container writable-layer mutations are reconstructible or disposable plumbing.
  • Routine operator flows should not require raw Docker names; instance IDs and console rows are the human-facing handles.

Fresh launches use DNS-safe base names:

jk-<unique-id>-<workspace-name>-<agent-role>
jk-<unique-id>-<agent-role>

The first form is for saved workspaces. The second form is for ad-hoc launches. The jk- prefix identifies all jackin-managed Docker resources. The unique ID comes second so all resources from the same session share a common prefix. Workspace and role components are compact lowercase ASCII alphanumeric strings derived by stripping non-alphanumeric characters (hyphens, slashes, underscores) from the source names. The unique ID makes concurrent launches distinct.

The total base name fits within 58 characters so that <base>-dind stays within the 63-character DNS label limit. If the compacted workspace and role components together exceed the available budget they are truncated with a 4-character SHA-256 suffix to keep names deterministic.
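The compaction and budget arithmetic above can be sketched as follows. This is an illustrative Python sketch, not the Rust implementation in src/instance/naming.rs; in particular, collapsing the workspace and role components into one truncated run before hashing is an assumption about how the truncation is split.

```python
import hashlib
import re

DIND_SUFFIX = "-dind"
MAX_DNS_LABEL = 63
MAX_BASE = MAX_DNS_LABEL - len(DIND_SUFFIX)  # 58 chars for <base>

def compact(name):
    """Lowercase and strip non-alphanumerics (hyphens, slashes, underscores)."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

def derive_base(unique_id, role, workspace=None):
    """Derive jk-<unique-id>[-<workspace>]-<role>, keeping <base>-dind
    within the 63-character DNS label limit."""
    components = [compact(workspace)] if workspace else []
    components.append(compact(role))
    base = "-".join(["jk", unique_id, *components])
    if len(base) <= MAX_BASE:
        return base
    # Over budget: truncate the compacted components and append a 4-char
    # SHA-256 suffix of the untruncated input so the result is deterministic.
    joined = "".join(components)
    digest = hashlib.sha256(joined.encode()).hexdigest()[:4]
    budget = MAX_BASE - len(f"jk-{unique_id}--{digest}")
    return f"jk-{unique_id}-{joined[:budget]}-{digest}"
```

For example, derive_base("ab12", "agent-smith", "My_Workspace") yields jk-ab12-myworkspace-agentsmith, and any over-budget input still fits the 58-character base budget.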

Derived Docker resources use the base name:

role container: <base>
dind container: <base>-dind
network: <base>-net
cert volume: <base>-dind-certs

DNS-sensitive values such as DOCKER_HOST, JACKIN_DIND_HOSTNAME, DOCKER_TLS_SAN, NO_PROXY, no_proxy, and TESTCONTAINERS_HOST_OVERRIDE use the DNS-safe DinD name. The role-container base must fit the DinD label budget because <base>-dind is used as a network hostname.
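Under the same assumptions, the derived resource names and the DinD label check can be sketched as a single mapping (the real derivation lives in src/runtime/naming.rs):

```python
def derived_resources(base):
    """Map an instance base name to its Docker resource names (sketch)."""
    dind = f"{base}-dind"
    # The DinD name doubles as a network hostname, so it must be a
    # valid DNS label: 63 characters at most.
    assert len(dind) <= 63, "base name exceeds the DinD label budget"
    return {
        "role_container": base,
        "dind_container": dind,
        "network": f"{base}-net",
        "cert_volume": f"{base}-dind-certs",
    }
```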

Implementation lives in src/instance/naming.rs and src/runtime/naming.rs. The full naming reference — including image names, derived resource names, lock files, roles cache layout, CLI selector formats, and worked examples — is in Instance and Resource Naming.

The important instance state lives under jackin’s data directory:

data/
├── instances.json
└── <container-base>/
    ├── home/
    ├── state/
    ├── claude/
    ├── codex/
    ├── amp/
    ├── opencode/
    ├── .config/gh/
    └── .jackin/
        ├── instance.json
        └── isolation.json

instance.json stores instance-level identity and lifecycle data: instance ID, container base, workspace name/label, workdir, host workdir fingerprint, role key, role display name, agent runtime, role source/ref, image tag, lifecycle status, last attach outcome, and Docker resource names.

instances.json is a global lookup index. It stores enough data to find candidate instances by workspace/directory/role/agent without walking every state directory. If the index is missing, jackin rebuilds it by scanning manifests. The manifest remains canonical.
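The rebuild path can be sketched as a scan over per-instance manifests. The field names below (container_base, role_key, and so on) are hypothetical stand-ins, not the actual manifest schema from src/instance/manifest.rs:

```python
import json
from pathlib import Path

def rebuild_index(data_dir):
    """Rebuild the global instances.json lookup index by scanning every
    per-instance manifest. The manifest stays canonical; the index is
    only lookup acceleration. Field names here are illustrative."""
    index = {}
    for manifest_path in Path(data_dir).glob("*/.jackin/instance.json"):
        manifest = json.loads(manifest_path.read_text())
        index[manifest["container_base"]] = {
            "workspace": manifest.get("workspace_name"),
            "workdir": manifest.get("workdir"),
            "role": manifest.get("role_key"),
            "agent": manifest.get("agent_runtime"),
            "status": manifest.get("status"),
        }
    return index
```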

isolation.json is mount-specific. It records materialized isolated mount state such as original source, mount destination, worktree/clone path, scratch branch, base commit, selector key, container name, and preservation status. It is owned by the isolation subsystem, not the instance manifest.

Implementation lives in src/instance/manifest.rs, src/isolation/state.rs, and src/instance/mod.rs.

The instance manifest uses these lifecycle statuses:

| Status | Meaning |
| --- | --- |
| active | Launch started and has not finalized. If the process dies mid-launch, this may become recoverable after Docker reconciliation. |
| running | Role container has been observed running. |
| clean_exited | Foreground session ended cleanly and no preserved isolated state remains. |
| crashed | Role container exited non-zero or was OOM-killed. |
| preserved_dirty | An isolated worktree/clone has uncommitted changes after foreground finalization. |
| preserved_unpushed | An isolated worktree/clone has commits not pushed or not merged back after foreground finalization. |
| restore_available | Docker resources are gone, but local state remains recoverable. |
| failed_setup | Launch setup failed before a usable foreground session. |
| superseded | Operator chose a fresh launch instead of restoring this instance. |
| purged | State was intentionally removed or tombstoned. |

Status writes should update both instance.json and instances.json through the manifest/index helpers.

jackin load performs restore discovery before claiming a fresh name. The decision separates Docker-present instances from missing recoverable state:

| Candidate Docker state | Candidate manifest state | Fresh load behavior |
| --- | --- | --- |
| Running role container | Any non-terminal state | Ignore for fresh-load gating; keep discoverable through hardline and console. |
| Stopped role container | Any non-terminal state | Ignore for fresh-load gating; hardline owns restart/reconnect decisions. |
| Docker inspect unavailable | Any relevant state | Surface Docker-specific diagnostics where the current path needs reliable Docker state. |
| Missing container | active, running, crashed, preserved_dirty, preserved_unpushed, restore_available, failed_setup | Treat as a restore candidate. Interactive load prompts; non-interactive load fails with guidance when a choice is needed. |
| Missing container | clean_exited, superseded, purged | Ignore. |
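The gating logic in the table above can be sketched in a few lines. This is an illustration of the decision, not the resolve_restore_candidate implementation in src/runtime/launch.rs, and the return labels are made up for the sketch:

```python
RESTORE_CANDIDATE_STATUSES = {
    "active", "running", "crashed", "preserved_dirty",
    "preserved_unpushed", "restore_available", "failed_setup",
}

def fresh_load_gate(container_present, status):
    """Classify an existing instance for fresh-load gating. Existing
    containers never gate a fresh load; only a missing container with
    recoverable manifest state becomes a restore candidate."""
    if container_present:
        return "ignore"  # hardline and the console own existing containers
    if status in RESTORE_CANDIDATE_STATUSES:
        return "restore_candidate"  # prompt interactively, or fail with guidance
    return "ignore"  # terminal states: clean_exited, superseded, purged
```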

Matching role/agent restore candidates can be rebuilt through the load prompt. Related role/agent candidates in the same workspace or directory can be recovered or rebuilt in place from the saved manifest source/ref. Running related instances are not blockers; they are active parallel work.

Implementation lives in src/runtime/launch.rs (resolve_restore_candidate, related restore helpers, name claiming, and launch orchestration).

jackin hardline targets an existing instance by current directory, role selector, instance ID, or full container base. It inspects the role container and chooses the reconnect path:

| Container state | Hardline behavior |
| --- | --- |
| Running | Attach to the live primary foreground session, or start another foreground session when --new is supplied. |
| Stopped, exit 0 | Treat as a completed session; use load for a fresh instance. |
| Stopped, non-zero or OOM | Restart in place and attach. If DinD is stopped, restart it first. If DinD/network/certs are missing, recreate them around the existing role container before restart. |
| Missing with indexed state | Rebuild the runtime around jackin-managed local state when the workspace or ad-hoc directory can be resolved. |
| Missing without indexed state | Fail with a clear “nothing to reconnect to” path. |
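The dispatch in the table above can be sketched as a small classifier. The container shape (a dict with running and exit_code, or None when missing) and the return labels are hypothetical; the real paths live in src/runtime/attach.rs:

```python
def hardline_action(container, indexed=False, new_session=False):
    """Choose the hardline reconnect path from observed container state
    (sketch of the decision table above)."""
    if container is None:
        # Missing container: rebuildable only when indexed local state exists.
        return "rebuild_from_local_state" if indexed else "fail_nothing_to_reconnect"
    if container["running"]:
        return "new_foreground_session" if new_session else "attach"
    if container["exit_code"] == 0:
        return "completed_use_load"  # clean exit: point the operator at load
    # Non-zero exit or OOM kill: restart in place, recreating missing
    # DinD/network/cert resources around the existing role container first.
    return "restart_in_place_and_attach"
```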

hardline --inspect is read-only and should report manifest state even when Docker is unavailable. When Docker can inspect a running role container, it also reports live agent-session inventory from docker top.

hardline --new starts another foreground agent process inside a running indexed instance. Today this is not a reconnectable named secondary session; Console agent session control owns the tmux-backed session substrate that will make secondary sessions independently reconnectable.

Implementation lives in src/runtime/attach.rs and the hardline dispatch paths in src/app/mod.rs.

The console reads the instance index and shows active/recoverable instances for the selected workspace or current directory. Instance rows expose:

  • reconnect/recover
  • new foreground agent session
  • read-only inspect
  • guarded purge

The console should treat the index as a discoverability source, not as proof that Docker resources still exist. Refresh paths reconcile local state with Docker when an action needs live container data.

Implementation lives in src/console/manager/state.rs, src/console/manager/render/list.rs, and src/console/manager/input/list.rs.

Persisted:

  • agent conversation history and runtime-local settings under the per-instance durable home
  • agent auth handoff state for supported runtimes, according to the resolved auth mode
  • GitHub CLI state stored for the instance
  • isolated worktree/clone state and metadata
  • instance manifest, index entry, and last attach outcome

Not persisted:

  • packages installed interactively into the role container filesystem
  • files written only to the role container writable layer outside mounted paths
  • DinD images, containers, volumes, and build cache
  • Docker networks and TLS cert volumes after Docker-side deletion

This boundary is deliberate. Durable tools belong in the role Dockerfile. Durable work belongs in mounted paths or jackin-managed per-instance state.

eject stops/removes Docker resources for an instance while preserving local recovery state unless purge is requested. purge deletes local recovery state and is guarded when matching Docker resources still exist. exile stops all running jackin-managed role containers.

Plain purge must refuse while the role container or DinD sidecar still exists, whether running or stopped. This prevents deleting local state that an existing Docker resource may still depend on. eject --purge is the combined path when the operator wants Docker resources and local recovery state removed together.
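The purge guard reduces to a simple predicate; this is an illustrative sketch of the rule stated above, not the cleanup code in src/runtime/cleanup.rs:

```python
def purge_allowed(role_container_exists, dind_exists, via_eject_purge=False):
    """Plain purge refuses while the role container or DinD sidecar still
    exists, running or stopped. eject --purge is the combined path that
    removes Docker resources and local recovery state together."""
    if via_eject_purge:
        return True
    return not (role_container_exists or dind_exists)
```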

Implementation lives in src/runtime/cleanup.rs.

| Area | Source |
| --- | --- |
| DNS-safe names and instance IDs | src/instance/naming.rs |
| Docker labels and derived resource names | src/runtime/naming.rs |
| Instance manifest and rebuildable index | src/instance/manifest.rs |
| Launch, restore discovery, name claiming | src/runtime/launch.rs |
| Attach, inspect, session inventory | src/runtime/attach.rs |
| Eject, purge, orphan cleanup | src/runtime/cleanup.rs |
| Workspace/materialized mount state | src/isolation/materialize.rs and src/isolation/state.rs |
| Foreground finalizer | src/isolation/finalize.rs |
| Console instance rendering/actions | src/console/manager/render/list.rs and src/console/manager/input/list.rs |
| DinD/Testcontainers smoke coverage | tests/dind_e2e.rs |