
ACP Client

Source attribution: The protocol description in this page is drawn from the official ACP documentation at agentclientprotocol.com/get-started/introduction. See the ACP Server page for the server-side perspective and the full protocol overview.

The ACP client is the editor or IDE component that connects to an ACP agent server and drives the user-facing interaction. Where the server page covers the agent’s implementation, this page covers the client’s responsibilities: establishing the connection, managing session state from the editor side, rendering streamed responses, and handling the protocol’s error conditions.

The client has three primary jobs:

  1. Transport management — Spawn or connect to the agent, maintain the connection, handle reconnection
  2. Session lifecycle — Create, resume, and fork sessions; persist session IDs across editor restarts
  3. Stream rendering — Consume the event stream and progressively display agent output, diffs, and tool activity in the editor UI

The hard parts on the client side differ from those on the server side:

  • Process management for local agents: The editor must spawn the agent process, pipe its stdio, handle crashes, and restart it without losing session state
  • Incremental rendering: Events arrive as a stream. The UI must show partial output without blocking on the complete response
  • Error disambiguation: A transport error (network dropped) is different from an agent error (bad request) is different from a tool error (file write failed). Each needs different recovery
  • Multi-session UX: Some editors allow multiple ACP sessions simultaneously (e.g., one per workspace folder). The client must route each session’s events to the right UI panel

For a local agent, the connection lifecycle is:

  1. Spawn: Editor spawns openoxide --acp as a child process, capturing its stdin/stdout
  2. Initialize: Client sends the initialize request with its own capabilities
  3. Ready: Agent responds with ServerCapabilities; editor updates its UI accordingly
  4. Session create/resume: Client sends session/create or session/resume
  5. Prompt turns: Client sends prompts, receives event streams
  6. Shutdown: Client sends shutdown request; agent cleans up and exits

For a remote agent, step 1 is replaced by establishing an HTTP connection to the agent’s URL.
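The lifecycle above can be sketched as a small state machine on the client side; the state and event names below are hypothetical illustrations, not part of the ACP spec:

```rust
/// Hypothetical connection states mirroring the lifecycle steps above.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ConnState {
    Spawned,      // process started (or HTTP connection opened), nothing sent yet
    Initializing, // initialize request in flight
    Ready,        // ServerCapabilities received
    InSession,    // session created or resumed
    ShutDown,
}

/// Advance the state machine on a lifecycle event; returns the new state
/// or an error string for an illegal transition.
fn advance(state: ConnState, event: &str) -> Result<ConnState, String> {
    use ConnState::*;
    match (state, event) {
        (Spawned, "initialize") => Ok(Initializing),
        (Initializing, "capabilities") => Ok(Ready),
        (Ready, "session/create") | (Ready, "session/resume") => Ok(InSession),
        (InSession, "shutdown") => Ok(ShutDown),
        (s, e) => Err(format!("illegal event {e:?} in state {s:?}")),
    }
}
```

Encoding the legal transitions this way makes it easy to reject out-of-order messages (e.g. a prompt before initialization completes) at one choke point.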

The initialize request is the first message sent. It establishes protocol version compatibility and exchanges capability lists:

Client → Server:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "0.1",
    "clientInfo": {
      "name": "my-editor",
      "version": "1.0.0"
    },
    "capabilities": {
      "diff_rendering": true,
      "terminal_display": true,
      "slash_commands": true
    }
  }
}

Server → Client:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "0.1",
    "serverInfo": {
      "name": "openoxide",
      "version": "0.1.0"
    },
    "capabilities": {
      "file_read": true,
      "file_write": true,
      "terminal_exec": true,
      "web_search": false,
      "streaming": true,
      "session_fork": true,
      "slash_commands": [
        { "command": "/review", "description": "Review recent changes" },
        { "command": "/explain", "description": "Explain selected code" }
      ]
    }
  }
}

The client uses the returned capability list to enable or disable UI features. If session_fork is false, the “fork session” button is hidden. If terminal_exec is false, terminal output widgets are not rendered.
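Capability gating works well as a pure function from the capability list to UI features, which keeps it testable; a sketch with field names assumed from the handshake example above:

```rust
/// Subset of the server's capabilities relevant to UI gating
/// (field names assumed from the handshake example above).
#[derive(Default)]
struct ServerCapabilities {
    session_fork: bool,
    terminal_exec: bool,
    slash_commands: Vec<(String, String)>, // (command, description)
}

/// Which UI features to enable, derived purely from capabilities.
fn enabled_features(caps: &ServerCapabilities) -> Vec<&'static str> {
    let mut features = Vec::new();
    if caps.session_fork {
        features.push("fork-button");
    }
    if caps.terminal_exec {
        features.push("terminal-widget");
    }
    if !caps.slash_commands.is_empty() {
        features.push("slash-command-palette");
    }
    features
}
```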

Session IDs are opaque strings that the server assigns at creation time. The client’s responsibilities:

Persisting session IDs: The client must store session IDs to disk (keyed by workspace path) so that when the editor restarts, it can resume the previous session rather than starting fresh.

Working directory scoping: When creating a session, the client sends the workspace root path. The server uses this to scope file operations — the agent won’t read or write files outside this directory without explicit permission.

{
  "method": "session/create",
  "params": {
    "workingDir": "/home/user/myproject",
    "config": {
      "model": "claude-sonnet-4-5",
      "auto_commits": true
    }
  }
}

Fork workflow: The client initiates a fork by sending the source session ID and the turn index at which to branch:

{
  "method": "session/fork",
  "params": {
    "source_session_id": "sess_abc123",
    "branch_at_turn": 5
  }
}

The server returns a new session ID. The client opens a new editor panel for the forked session.

For streaming responses, the server sends a sequence of events over the SSE channel (HTTP) or as a series of JSON-RPC notifications (stdio). The client must handle each event type:

  • text_delta — Append text to the response panel
  • tool_use — Show a “calling tool…” indicator in the activity pane
  • tool_result — Render the tool result (file diff, terminal output, etc.)
  • diff_block — Render a side-by-side diff widget with accept/reject controls
  • plan_step — Show the agent’s current step in a progress indicator
  • error — Display the error message, offer a retry option
  • done — Mark the turn as complete, enable the input field again

The key implementation constraint is that the UI must remain responsive while events arrive. The event consumption loop must run on a background thread or async task, posting UI updates to the main thread via a channel.
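A minimal sketch of that pattern using std threads and channels in place of an async runtime (event parsing reduced to plain strings for brevity):

```rust
use std::sync::mpsc;
use std::thread;

/// Simplified UI update type for the sketch.
#[derive(Debug, PartialEq)]
enum UiUpdate {
    AppendText(String),
    TurnDone,
}

/// Spawn a background consumer that turns raw events into UI updates and
/// posts them through a channel; the main (UI) thread drains the receiver
/// incrementally instead of blocking on the complete response.
fn spawn_consumer(raw_events: Vec<String>) -> mpsc::Receiver<UiUpdate> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        for ev in raw_events {
            let update = if ev == "done" {
                UiUpdate::TurnDone
            } else {
                UiUpdate::AppendText(ev)
            };
            if tx.send(update).is_err() {
                break; // the UI side hung up; stop consuming
            }
        }
    });
    rx
}
```

In a real client the loop would read from the transport rather than a `Vec`, but the shape is the same: the channel is the only coupling between the consumption loop and the renderer.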

The diff_block event deserves special attention. When the agent writes a file, it sends a structured diff event:

{
  "type": "diff_block",
  "path": "src/auth/login.rs",
  "unified_diff": "--- a/src/auth/login.rs\n+++ b/src/auth/login.rs\n@@ -42,7 +42,12 @@\n ..."
}

The client is responsible for rendering this. Options:

  1. Inline diff widget: Show the diff in the conversation panel using colored +/- lines
  2. Side-by-side view: Open the file in a split editor showing before/after
  3. Deferred display: Show a “file changed” notification, let the user open the diff later

Crucially, the file has already been modified on disk by the time the diff event arrives. The diff is informational — it tells the client what happened — not a proposal for approval. If the editor wants a “review before apply” model, it must implement this by keeping the agent’s writes in a staging area (the server would need to support this mode) rather than accepting the file write immediately.
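For the inline-widget option, the client only needs to classify each line of the unified diff before coloring it; a self-contained sketch:

```rust
/// A classified line of a unified diff, ready for colored rendering.
#[derive(Debug, PartialEq)]
enum DiffLine<'a> {
    Added(&'a str),
    Removed(&'a str),
    Context(&'a str),
    Meta(&'a str), // ---, +++ and @@ header lines
}

/// Classify each line of a unified diff for inline +/- rendering.
/// Header prefixes are checked before the single-character ones so that
/// "+++"/"---" lines are not mistaken for additions/removals.
fn classify(unified_diff: &str) -> Vec<DiffLine<'_>> {
    unified_diff
        .lines()
        .map(|l| {
            if l.starts_with("+++") || l.starts_with("---") || l.starts_with("@@") {
                DiffLine::Meta(l)
            } else if let Some(rest) = l.strip_prefix('+') {
                DiffLine::Added(rest)
            } else if let Some(rest) = l.strip_prefix('-') {
                DiffLine::Removed(rest)
            } else {
                DiffLine::Context(l)
            }
        })
        .collect()
}
```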

When the user presses Escape or a Stop button mid-turn, the client sends a cancellation:

{
  "jsonrpc": "2.0",
  "method": "$/cancelRequest",
  "params": {
    "id": 42
  }
}

The $/cancelRequest method name is borrowed from LSP. After sending, the client should:

  1. Mark the current turn as “cancelled” in the UI
  2. Wait for the server to send a done event (the server must acknowledge cancellation)
  3. If no done arrives within a timeout, assume the server is stuck and force-restart
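Steps 2 and 3 amount to a bounded wait on the event stream; a sketch using a synchronous channel to stand in for the transport's read loop:

```rust
use std::sync::mpsc;
use std::time::{Duration, Instant};

#[derive(Debug, PartialEq)]
enum CancelOutcome {
    Acknowledged, // server sent `done`
    ForceRestart, // no `done` within the timeout; assume the server is stuck
}

/// Wait for the server's `done` acknowledgement after sending $/cancelRequest,
/// draining any straggler events (text deltas, tool results) along the way.
fn await_cancel_ack(events: &mpsc::Receiver<String>, timeout: Duration) -> CancelOutcome {
    let deadline = Instant::now() + timeout;
    loop {
        let remaining = deadline.saturating_duration_since(Instant::now());
        match events.recv_timeout(remaining) {
            Ok(ev) if ev == "done" => return CancelOutcome::Acknowledged,
            Ok(_other) => continue, // straggler event; keep waiting
            Err(_) => return CancelOutcome::ForceRestart, // timeout or closed channel
        }
    }
}
```

Tracking a single deadline (rather than resetting the timeout on every event) guarantees the wait is bounded even if the server keeps streaming without ever acknowledging.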

OpenOxide’s TUI mode (openoxide --tui, the default) is itself an ACP client connecting to the agent core. This makes the architecture clean: the agent core is always an ACP server, and the TUI is one of several possible clients. An editor plugin, a web frontend, or a CI runner could connect to the same agent core.

pub struct AcpClient {
    transport: Box<dyn Transport>,
    session_id: Option<SessionId>,
    capabilities: Option<ServerCapabilities>,
    next_id: AtomicU64,
}

#[async_trait] // async fns in a `dyn` trait need the async-trait crate (or manual boxing)
pub trait Transport: Send + Sync {
    async fn send(&self, msg: &[u8]) -> anyhow::Result<()>;
    async fn recv(&self) -> anyhow::Result<Vec<u8>>;
}

Two transport implementations:

  • StdioTransport wraps the agent’s child process stdin/stdout
  • HttpTransport wraps reqwest for remote agents

impl AcpClient {
    pub async fn connect_local(binary: &Path, args: &[&str]) -> anyhow::Result<Self> {
        let mut child = tokio::process::Command::new(binary)
            .args(args)
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .stderr(Stdio::inherit()) // forward stderr to terminal for debugging
            .spawn()?;
        let stdin = child.stdin.take().unwrap();
        let stdout = child.stdout.take().unwrap();
        let transport = StdioTransport::new(stdin, stdout);
        let mut client = AcpClient::new(Box::new(transport));
        client.initialize().await?;
        Ok(client)
    }

    async fn initialize(&mut self) -> anyhow::Result<()> {
        let response = self.request("initialize", json!({
            "protocolVersion": ACP_VERSION,
            "clientInfo": { "name": "openoxide-tui", "version": env!("CARGO_PKG_VERSION") },
            "capabilities": { "diff_rendering": true, "terminal_display": true }
        })).await?;
        self.capabilities = Some(serde_json::from_value(response["result"]["capabilities"].clone())?);
        Ok(())
    }
}

Session IDs are written to .openoxide/sessions/.current in the workspace:

pub async fn save_session_id(workspace: &Path, id: &SessionId) -> anyhow::Result<()> {
    let path = workspace.join(".openoxide/sessions/.current");
    tokio::fs::create_dir_all(path.parent().unwrap()).await?;
    tokio::fs::write(&path, id.as_str()).await?;
    Ok(())
}

pub async fn load_session_id(workspace: &Path) -> anyhow::Result<Option<SessionId>> {
    let path = workspace.join(".openoxide/sessions/.current");
    match tokio::fs::read_to_string(&path).await {
        Ok(s) => Ok(Some(SessionId::from(s.trim()))),
        Err(e) if e.kind() == io::ErrorKind::NotFound => Ok(None),
        Err(e) => Err(e.into()),
    }
}

On startup, if a session ID exists and a resume attempt succeeds, the conversation history is restored. If the resume fails (session expired on the server), the error is shown and a new session is created.
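That resume-or-create decision can be isolated from the transport; a sketch where `resume` and `create` are hypothetical stand-ins for the client's session/resume and session/create calls:

```rust
/// Outcome of the startup session logic described above.
#[derive(Debug, PartialEq)]
enum StartupSession {
    Resumed(String),                // existing session restored
    Fresh { note: Option<String> }, // new session, optionally with an expiry notice
}

/// Resume if a persisted ID exists and the server accepts it; otherwise fall
/// back to a fresh session, surfacing the failure to the user rather than
/// silently pretending the history survived.
fn resume_or_create(
    persisted: Option<&str>,
    resume: impl Fn(&str) -> Result<String, String>,
    create: impl Fn() -> String,
) -> StartupSession {
    match persisted {
        Some(id) => match resume(id) {
            Ok(sid) => StartupSession::Resumed(sid),
            Err(why) => {
                // Create the replacement session, but keep the notice so the
                // UI can explain why history is gone.
                let _new_id = create();
                StartupSession::Fresh {
                    note: Some(format!("Previous session expired ({why}) — starting new session")),
                }
            }
        },
        None => StartupSession::Fresh { note: None },
    }
}
```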

pub async fn run_turn(
    &mut self,
    content: String,
    event_tx: mpsc::Sender<AcpEvent>,
) -> anyhow::Result<()> {
    self.request_streaming("prompt/send", json!({
        "session_id": self.session_id,
        "content": [{ "type": "text", "text": content }]
    }), |event| {
        let tx = event_tx.clone();
        async move { let _ = tx.send(event).await; }
    }).await
}

The TUI’s rendering loop receives events from event_tx and updates the display. AcpEvent is a Rust enum mirroring the JSON event types: TextDelta(String), ToolUse(ToolUseEvent), DiffBlock(DiffBlockEvent), Done, Error(String).
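A sketch of that enum and the dispatch it drives (payload types simplified to strings here; the real ToolUseEvent and DiffBlockEvent structs would carry more fields):

```rust
/// Client-side mirror of the wire event types, simplified for the sketch.
#[derive(Debug)]
enum AcpEvent {
    TextDelta(String),
    ToolUse(String),   // tool name
    DiffBlock(String), // file path
    Error(String),
    Done,
}

/// Map each event to the UI action described earlier. Exhaustive matching
/// means a new event variant is a compile error until every renderer handles it.
fn ui_action(event: &AcpEvent) -> &'static str {
    match event {
        AcpEvent::TextDelta(_) => "append-text",
        AcpEvent::ToolUse(_) => "show-tool-indicator",
        AcpEvent::DiffBlock(_) => "render-diff-widget",
        AcpEvent::Error(_) => "show-error",
        AcpEvent::Done => "enable-input",
    }
}
```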

If the agent process exits unexpectedly (crash, OOM kill), the transport’s recv() returns an error. The client handles this:

loop {
    match self.transport.recv().await {
        Ok(data) => process_event(&data),
        Err(e) if is_connection_closed(&e) => {
            // Log the crash, attempt restart
            self.reconnect().await?;
        }
        Err(e) => return Err(e),
    }
}

async fn reconnect(&mut self) -> anyhow::Result<()> {
    // Re-spawn the agent process
    let new_transport = StdioTransport::connect_to_new_process().await?;
    self.transport = Box::new(new_transport);
    self.initialize().await?;
    // Resume the existing session — history is persisted on disk
    self.resume_session().await?;
    Ok(())
}

Because sessions are persisted to disk by the server (in .openoxide/sessions/), a crash-and-restart restores the full conversation history as long as the session files are intact.


Between the editor spawning the agent process and the agent being ready to accept initialize, there’s a startup window where writes to stdin are buffered or lost. The client must wait for the process to signal readiness before sending initialize. Conventional approach: wait for the first newline or the first valid JSON object on stdout. Do not use a fixed sleep delay — startup time varies by machine load.
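A sketch of the readiness wait, treating the first JSON-looking line on stdout as the signal (the exact readiness convention here is an assumption, as noted above):

```rust
use std::io::BufRead;

/// Read lines from the agent's stdout until the first one that looks like a
/// JSON object, which we treat as the readiness signal.
/// Returns that line, or None if the stream ends first.
fn wait_for_ready<R: BufRead>(stdout: R) -> Option<String> {
    for line in stdout.lines() {
        let line = line.ok()?;
        if line.trim_start().starts_with('{') {
            return Some(line);
        }
        // Non-JSON startup noise (banners, warnings) is skipped,
        // not treated as readiness.
    }
    None
}
```

Because this blocks until output actually arrives, it adapts to machine load automatically, unlike a fixed sleep.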

Not all session resume attempts succeed. The server may have pruned old sessions, the session file may be corrupt, or the server may have restarted with a clean state. The client must handle session/resume returning an error gracefully: display a message (“Previous session expired — starting new session”), create a new session, and update the persisted session ID. Never silently retry with a new session while pretending the conversation history is intact.

When the ACP client spawns the agent as a child process and pipes its stdio, the agent’s stderr is still connected to the terminal (if Stdio::inherit() is used for stderr). This can cause the agent’s debug log output to appear on the user’s terminal interleaved with the editor output. The correct behavior is to redirect agent stderr to a log file, not the terminal. Many agent implementations also use PTY allocation for certain operations — these will fail if the agent is running as a subprocess with a piped stdin rather than a real TTY. The agent must detect this and degrade gracefully.

In editors like VS Code, extensions can be deactivated and reactivated without the editor itself restarting. If the ACP client is created in the extension’s activate() hook and the extension is deactivated, the child process (agent) continues running orphaned. On re-activation, the extension may try to spawn a second agent process, resulting in two instances competing for the same .openoxide/sessions/ storage. The client must check for a running agent process (by PID file or port check) before spawning a new one.
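A Unix-flavored sketch of the PID-file check (liveness approximated by the existence of /proc/&lt;pid&gt;, which is Linux-specific; a portable client would use a platform API or an advisory file lock instead):

```rust
use std::fs;
use std::path::Path;

/// Check a PID file before spawning a new agent. Returns the live agent's
/// PID if one appears to be running, so the caller can attach instead of
/// spawning a competing instance.
fn running_agent_pid(pid_file: &Path) -> Option<u32> {
    let pid: u32 = fs::read_to_string(pid_file).ok()?.trim().parse().ok()?;
    if Path::new(&format!("/proc/{pid}")).exists() {
        Some(pid)
    } else {
        None // stale PID file: the previous agent died without cleaning up
    }
}
```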