
Container Credential Exposure — Beyond Env Injection

Status: Open — design proposal

jackin’s credential-forwarding flow injects auth tokens into agent containers via docker run -e KEY=VALUE. This applies uniformly to:

  • ANTHROPIC_API_KEY, CLAUDE_CODE_OAUTH_TOKEN (Claude)
  • OPENAI_API_KEY (Codex)
  • GH_TOKEN, GITHUB_TOKEN, GH_ENTERPRISE_TOKEN (GitHub CLI)
  • any future axis the credential source pattern adds

Once a token is in the container’s process env, several host-side surfaces expose it to anyone who can talk to the local Docker daemon:

  • docker inspect <container> (Env field)
  • docker exec <container> env
  • ps auxe on an in-container process (the agent)
  • Container filesystem — Sync-mode role-state files (hosts.yml for gh, auth.json for Codex, .credentials.json / account.json for Claude)
  • the Docker daemon’s container-state JSON on disk (/var/lib/docker/...)
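The same mechanism can be demonstrated without Docker at all: any value placed in a process’s environment is readable by anyone who can read that process’s /proc entry (Linux; same UID or root). A minimal sketch, with an illustrative token value:

```shell
# Put a fake token in a child process's env, then read it back from the
# outside via /proc — the same data docker inspect / docker exec env expose.
GH_TOKEN=ghp_example sh -c 'sleep 5' &
pid=$!
sleep 1                                   # let the child finish exec'ing
leaked=$(tr '\0' '\n' < "/proc/$pid/environ" | grep '^GH_TOKEN=')
echo "$leaked"
kill "$pid" 2>/dev/null
```

The env block is plain data, not a secret store; docker inspect and docker exec env simply read it on the container’s behalf.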

The threat model is “anyone with docker inspect access can read the token,” which on macOS Docker Desktop means the operator’s UID (low marginal exposure — the operator already has Keychain access to the host’s gh token). On Linux the docker group is typically root-equivalent, which broadens the surface.

This is the same pattern Claude / Codex auth axes already use, so the GitHub auth feature didn’t introduce a new posture — but landing it brought the question into focus, and a long-term answer benefits every auth axis at once.

A target end state that fully addresses the exposure:

  1. Tokens never appear in docker inspect Env. The container’s process env has the token only inside processes that genuinely need it, and only for as long as needed.
  2. Tokens never persist on the container filesystem. No hosts.yml, no .credentials.json, no auth.json left at rest in the container’s writable layer or in jackin’s role-state directory.
  3. Token rotation on the host (or inside any container) propagates to every consumer within seconds, without restart. (This goal is shared with the Live bidirectional auth sync item.)
  4. Per-call audit trail. Every credential delivery is logged with timestamp, requester, requested name, approved/denied. Operators can trace “who used my token, when, for what.”
  5. Per-call revocability. Operators can yank a token mid-session and the next request fails immediately, even if other agents in other containers were using it a millisecond ago.
  6. Named grants only. Ambient credential-shaped variables and sockets such as GH_TOKEN, GITHUB_TOKEN, SSH_AUTH_SOCK, *_API_KEY, and *_SECRET do not cross into the container merely because a stack integration or role hints at them. They appear only as explicit credential grants in the session contract.
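Goal 6 is, mechanically, an allowlist at launch time. A minimal sketch of the filtering, assuming a hypothetical grants list resolved from the session contract (variable names below are examples, not jackin’s actual config):

```shell
# Only variables the session contract explicitly names become -e flags;
# ambient credential-shaped vars never make the list.
grants="GH_TOKEN"                      # explicitly granted in the contract
GH_TOKEN=ghp_example                   # granted: crosses over
OPENAI_API_KEY=sk-example              # ambient: must NOT cross over
SSH_AUTH_SOCK=/tmp/agent.sock          # ambient: must NOT cross over
flags=""
for name in $grants; do
  eval "val=\${$name:-}"
  if [ -n "$val" ]; then flags="$flags -e $name"; fi
done
echo "docker run$flags IMAGE"
```

Using -e NAME without a value lets docker run copy the value from the launcher’s own env, so the token never appears in shell history or the launcher’s argv.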

The host-bridge daemon item already proposes a flow that targets these goals — the agent calls a secret.request MCP tool, the daemon prompts the operator (TouchID / polkit), the value is delivered as an opaque handle that the runtime substitutes into one command, and nothing lands at rest. That is the long-term answer. This item exists to capture the trade-offs and brainstorm intermediate stops along the way.

Implementation strategies — trade-off survey


Five candidate paths, in increasing order of “structural rigor” and “implementation cost”:

1. Status quo: env injection via docker run -e

What ships today. The token is visible in docker inspect Env. Same posture as every container that takes secrets from the host today (which is most of them).

  • Pros: simple; works with every consumer (CLI tools, MCP servers, GitHub-Actions-style scripts read env without ceremony).
  • Cons: broadest exposure surface listed above.
  • Use case: single-operator local dev, accepted threat model. Documented in Design principles.

2. File mount: tmpfs bind-mounted into the container

jackin writes the token to a tmpfs file on the host, bind-mounts it into the container at a known path (e.g. /run/secrets/gh-token), and the entrypoint reads the file and either:

  • 2a. Re-exports it into env at process startup → tokens hidden from docker inspect Env, but docker exec <container> env still leaks them once the entrypoint has sourced the file.
  • 2b. Leaves the file as-is and configures the consumer to read the file directly. Works for gh (file-based hosts.yml, already happens under Sync) and git (via !gh auth git-credential credential helper). Breaks for consumers that read env without a file fallback (e.g. github-mcp-server reads GITHUB_TOKEN).

Trade-offs:

  • Pros: clean docker inspect Env. The operator-facing pattern matches Docker Compose’s secrets: stanza, which experienced operators already understand.
  • Cons: requires a per-consumer credential-helper or env-shim. Doesn’t fully eliminate exposure (option 2a) or breaks consumers (option 2b). New mount path to maintain.
  • Implementation effort: moderate. Existing provision_*_auth helpers already write files; the launch surface needs to drop the -e flags conditionally and the entrypoint needs the source-from-file shim.
  • Use case: intermediate stop between status quo and the daemon-based answer.
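A minimal sketch of the option-2a shim, simulated here with plain files instead of a real bind-mount (paths and the token value are illustrative):

```shell
# Write a hypothetical entrypoint shim that sources the mounted secret
# file into env for the agent process tree only, then hands off.
mkdir -p /tmp/jackin-demo
cat > /tmp/jackin-demo/entrypoint.sh <<'EOF'
#!/bin/sh
SECRET_FILE="${SECRET_FILE:-/run/secrets/gh-token}"
if [ -r "$SECRET_FILE" ]; then
  GH_TOKEN="$(cat "$SECRET_FILE")"
  export GH_TOKEN
fi
exec "$@"    # hand off to the real agent process, token in its env
EOF
chmod +x /tmp/jackin-demo/entrypoint.sh
printf 'ghp_example' > /tmp/jackin-demo/gh-token

# Simulate container start: the token reaches the agent's env via the
# file, so no -e flag is needed and docker inspect Env stays clean.
SECRET_FILE=/tmp/jackin-demo/gh-token /tmp/jackin-demo/entrypoint.sh printenv GH_TOKEN
```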

3. Docker swarm secrets

Docker’s first-class secrets API stores values encrypted at rest in the swarm Raft store and mounts them as files in containers. It is available only when the daemon runs in swarm mode.

Trade-offs:

  • Pros: standard primitive; mature; encrypted at rest in swarm store.
  • Cons: swarm mode is a heavy infrastructure change. jackin’s launcher uses plain docker run. Operators don’t run swarm for local dev. Migrating is out of scope.
  • Use case: rejected. Captured here so the option is explicitly considered and ruled out.

4. macOS Keychain bridge over a control socket


A small host-side helper opens the Keychain (operator-authed via Touch ID or login password), exposes credential reads over a Unix domain socket bound at ~/.jackin/run/, and the container reaches it via bind-mount.

Trade-offs:

  • Pros: token never leaves macOS Keychain except into the helper’s memory and the requesting process’s stdin/env. docker inspect shows nothing. macOS-native crypto (Keychain ACLs, Touch ID gate).
  • Cons: macOS-specific; Linux hosts need a parallel path (libsecret? plain file?). The helper IS a daemon, so this reduces to “build the daemon” anyway.
  • Use case: functionally equivalent to the host-bridge daemon for the macOS side. Captured here as the macOS-specific framing of the same architecture.

5. Host-bridge daemon (secret.request)

Per the host-bridge roadmap and the jackin daemon umbrella:

  • jackin daemon runs on the operator’s host, holds a per-operator Unix socket.
  • An auto-registered MCP server inside every agent container exposes secret.request(name, scope, reason) and secret.use_in(template) tools.
  • Agent calls the tool → MCP server forwards request to daemon over the socket → daemon prompts operator (Touch ID / polkit / password) → daemon resolves the secret from the operator’s chosen source (Keychain / 1Password / etc.) → returns an opaque handle to the agent.
  • The handle is consumed in exactly one command (secret.use_in). The agent runtime substitutes the handle’s value into the spawned process’s env or stdin and the substitution is invisible to the agent’s chat history, tool output, and tracing.
  • Container env stays empty for credentials. docker inspect, docker exec env, container fs all clean.
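A rough sketch of the use-once substitution, with a stub standing in for the daemon round-trip (none of these names are jackin’s actual API):

```shell
# resolve_handle stands in for the socket round-trip to the daemon,
# where the operator would approve or deny the request.
resolve_handle() {
  case "$1" in
    jackin-handle-1) printf 'ghp_example' ;;    # illustrative value
    *) return 1 ;;                              # denied / unknown handle
  esac
}

# secret_use_in HANDLE VAR CMD...: resolve the handle and run one
# command with the value in that process's env only.
secret_use_in() {
  handle="$1"; var="$2"; shift 2
  value="$(resolve_handle "$handle")" || return 1
  env "$var=$value" "$@"
}

secret_use_in jackin-handle-1 GH_TOKEN sh -c 'echo "GH_TOKEN present: ${GH_TOKEN:+yes}"'
```

The key property: the agent only ever holds the opaque handle string; the value exists solely in the spawned process’s env and never in a file or the transcript.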

Trade-offs:

  • Pros: structurally addresses every exposure surface listed in the Problem section. Same daemon hosts other reactive features (live auth sync, agent attention prompts) so the cost amortizes.
  • Cons: large architectural lift. Requires the daemon’s lifecycle / install / control-socket / security posture to be designed first (the umbrella item). Per-consumer integration: tools that read env directly (MCP servers especially) need a runtime-level shim that converts handles to ephemeral env at process spawn — same shim from option 2a, but the source is the daemon instead of a tmpfs file.
  • Use case: the canonical answer. Phase 3 of the jackin daemon implementation phasing.

The recommended trajectory:

  1. Now. Document the exposure in the operator-facing security model and authentication overview so operators understand the trust boundary they’re consenting to. (Quick-win edit, follow-up PR.)
  2. Medium-term. Land option 2 (file-mount) for the consumers that support it (gh and git push — both already file-aware via hosts.yml and !gh auth git-credential). Drop the env exports under Sync mode where the file alone is sufficient. GitHub’s token mode keeps env injection because the entire point of that mode is “use this scoped value as GH_TOKEN.” Consumers that need env (github-mcp-server) keep getting env until the daemon path lands.
  3. Long-term. Option 5. The daemon’s per-axis adapter for secret.request is the structural fix. Tokens never enter the container.
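The medium-term stop for gh (step 2) could look roughly like this: the launcher writes hosts.yml into a directory that gets bind-mounted read-only, and drops the -e flags entirely. The paths and token are illustrative; GH_CONFIG_DIR is a real gh environment variable:

```shell
# Stage a hosts.yml on the host side (in the real flow this would live
# on a tmpfs); gh reads it instead of a GH_TOKEN env var.
GH_CONFIG_DIR=/tmp/jackin-gh-config
mkdir -p "$GH_CONFIG_DIR"
cat > "$GH_CONFIG_DIR/hosts.yml" <<'EOF'
github.com:
    oauth_token: ghp_example
    git_protocol: https
EOF
chmod 600 "$GH_CONFIG_DIR/hosts.yml"

# Launch with a read-only mount and no -e flag, e.g.:
#   docker run -v "$GH_CONFIG_DIR:/root/.config/gh:ro" ...
# gh in the container authenticates from the file; docker inspect Env stays clean.
```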
Out of scope

  • SSH keys. jackin deliberately does not forward SSH keys (authentication overview). This item covers token-style credentials only.
  • Cross-host (remote agent) credential injection. The Kubernetes phase of jackin will need its own version of this story — out of scope here.
  • Operator-to-container credential delivery for non-secret config (e.g. GH_HOST is operator-set but not sensitive; pass-through env is fine and stays).

Open questions

  • Per-consumer shim. The medium-term file-mount approach needs a runtime-level shim that reads /run/secrets/<name> and either (a) re-exports as env at agent-process spawn or (b) configures the consumer to read the file. Which consumers benefit from (a) vs (b)? gh is (b) today; git is (b) via gh’s helper; github-mcp-server would need (a). Is there a clean place for jackin to inject (a) without modifying every MCP server? The agent runtime’s process spawn API is the natural seam — Claude Code and Codex both spawn child processes through their tool-use loop; instrumenting the spawn side would let jackin substitute env from a file at exec time.
  • Daemon adapter contract. What does secret.request(name) return when the operator denies? Structured error vs. silent None? The MCP-server-side abstraction for the agent has to be uniform across credentials whose existence the operator denies vs. credentials they explicitly forbid for the workspace.
  • Audit log persistence. Where does the per-call delivery log live? ~/.jackin/log/credential-bridge.jsonl with rotation? Operator-readable / -searchable? Same place as the host-bridge audit log, or separate?
  • Compose-style mount for token-mode env values. Today […github.env].GH_TOKEN resolves at launch time to a String the launcher pushes via -e. In the file-mount path, that resolved String would land in the tmpfs file instead. The launcher would need to choose which secrets go to env vs. file — likely driven by a per-consumer registry (MCP servers want env, gh is file-happy, git is helper-happy).
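For the audit-log question, one hypothetical shape for the JSON Lines entries (field names and the path are open questions, not a decided format):

```shell
# Append-only JSONL audit trail: one line per credential delivery.
AUDIT_LOG=/tmp/jackin-credential-bridge.jsonl
: > "$AUDIT_LOG"                       # start fresh for the demo

log_grant() {                          # log_grant NAME REQUESTER DECISION
  printf '{"ts":"%s","name":"%s","requester":"%s","decision":"%s"}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "$2" "$3" >> "$AUDIT_LOG"
}

log_grant GH_TOKEN       agent-workspace-1 approved
log_grant OPENAI_API_KEY agent-workspace-2 denied
```

One line per delivery keeps the log greppable for “who used my token, when” without any tooling beyond grep.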

The exposure exists across every auth axis (Claude / Codex / GitHub). The mitigations (file-mount, daemon bridge, Keychain integration) all apply uniformly. Designing each axis’s escape from env injection separately would produce three different shapes — same anti-pattern the jackin daemon umbrella exists to prevent. One item, one design pass, three downstream adapters.

Related items

  • jackin daemon — umbrella for the long-running host process the canonical fix depends on.
  • Host bridge — sibling item; the secret.request flow is the user-visible shape of option 5 above.
  • Live bidirectional auth sync — sibling item; the daemon also keeps host and container in lock-step on token rotation, which combined with this item’s “tokens never persist in the container” goal eliminates token drift entirely.
  • Credential proxy (existing roadmap line) — earlier idea about proxy-based credential injection; the host-bridge / secret.request flow is the operator-mediated answer to the same problem.
  • Credential source pattern — future unified credential resolver. This item’s per-consumer registry (which secrets go to env vs file vs daemon-handle) plugs into that resolver.
  • Design principles — repo-wide design principles. The exposure surface this item addresses sits inside the “Container is the trust boundary, not the prompt” principle: jackin shrinks the boundary further by reducing what crosses into the container’s env.