Notebooks

The built-in notebook MCP server lets agents create, execute, and manage Jupyter notebooks and Quarto documents (.qmd). Agents talk to it through MCP tools; humans view results in JupyterLab.

Installation

The notebook server runs as a Docker container. The image builds automatically on deploy (src/pynchy/agent/build.sh) or on first use by the MCP manager. To build manually:

docker build -t pynchy-mcp-notebook:latest -f src/pynchy/agent/mcp/notebook.Dockerfile .

How it works

The notebook server is a first-party plugin running as a Docker container, sandboxing all kernel execution. It provides:

  • MCP tools for agents — start kernels, execute code cells, add markdown, save/load notebooks
  • JupyterLab for humans — web frontend on port 8888 for viewing and interacting with notebooks
  • IPython kernels managed directly via jupyter_client — no jupyter_server overhead

Agents work with notebooks only through MCP tools, which handle kernel lifecycle, cell execution, output collection, and auto-saving. They can also read and edit .qmd files directly from the workspace, since notebooks live in groups/<workspace>/notebooks/ (mounted at /workspace/group/notebooks/ inside the container).

Enabling notebooks

Add "notebook" to a workspace's MCP server list:

[workspaces.research]
mcp_servers = ["notebook"]

No per-workspace config needed. The server scopes notebooks to groups/<workspace>/notebooks/ and sets the kernel's working directory to groups/<workspace>/, so the agent references workspace files naturally (e.g., pd.read_csv("mydata.csv")).

Each workspace gets its own server instance — no cross-workspace contamination.

Default format: Quarto (.qmd)

Notebooks default to .qmd (Quarto markdown) rather than .ipynb. Quarto documents are plain text with code fences — easier for agents to read and easier to diff in version control:

## Sales Analysis

Loaded the Q4 sales data and filtered for the US region.

```{python}
import pandas as pd
df = pd.read_csv("sales.csv")
df[df.region == "US"].head()
```

The US region accounts for 62% of total revenue.

To work with .ipynb files instead, include the extension in the notebook name (e.g., start_kernel(name="analysis.ipynb")).
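
To see why the plain-text format is easy for agents to work with, here is a hedged sketch of splitting a Quarto document into markdown and code cells. The regex and cell dictionaries are illustrative only, not the server's actual parser:

```python
import re

# Illustrative sketch: split a Quarto (.qmd) document into ordered cells.
# Assumes code cells use the "{python}" fence convention shown above;
# the real server's parser may differ.
CODE_FENCE = re.compile(r"^```\{python\}\n(.*?)^```$", re.DOTALL | re.MULTILINE)

def parse_qmd(text: str) -> list[dict]:
    cells, pos = [], 0
    for m in CODE_FENCE.finditer(text):
        prose = text[pos:m.start()].strip()
        if prose:
            cells.append({"type": "markdown", "source": prose})
        cells.append({"type": "code", "source": m.group(1).rstrip()})
        pos = m.end()
    tail = text[pos:].strip()
    if tail:
        cells.append({"type": "markdown", "source": tail})
    return cells

# Build a small sample document (fence assembled at runtime).
fence = "`" * 3
doc = (
    "## Sales Analysis\n\n"
    "Loaded the Q4 sales data.\n\n"
    f"{fence}{{python}}\n"
    "import pandas as pd\n"
    f"{fence}\n\n"
    "The US region leads.\n"
)
cells = parse_qmd(doc)
```

A .ipynb file, by contrast, is a JSON structure with embedded outputs, which is harder to diff and edit in place.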

MCP tools

All tools are available once the workspace includes "notebook" in its server list.

Kernel lifecycle

start_kernel(name?) — Start an IPython kernel. If name refers to an existing notebook, the server loads it and re-executes all code cells to restore kernel state (session rehydration). If name is omitted, a name like 2026-02-20-ailing-amoeba is generated.

shutdown_kernel(kernel_id) — Save and shut down a kernel.

list_kernels() — List active kernels with their notebook names and cell counts.

Working with cells

execute_cell(kernel_id, code) — Execute Python code. Returns outputs (text, image file paths, errors). The cell and its outputs are appended to the notebook and auto-saved to disk. Images are saved to <notebook>_files/ alongside the notebook.

add_markdown(kernel_id, content) — Add a markdown cell. Auto-saves to disk.

File operations

save_as(kernel_id, name) — Save under a different name. Use .qmd or .ipynb extension to pick format.

read_notebook(name) — Read an existing notebook without starting a kernel. Returns structured cell contents.

list_notebooks() — List saved notebooks with sizes and modification times.

Agent workflow

A typical agent session:

start_kernel()
  → kernel_id: "a1b2c3d4", notebook: "2026-02-20-ailing-amoeba.qmd"

execute_cell(kernel_id="a1b2c3d4", code="import pandas as pd\ndf = pd.read_csv('sales.csv')\ndf.head()")
  → outputs: [{"type": "result", "text": "   date    revenue\n0  ..."}]

add_markdown(kernel_id="a1b2c3d4", content="## Sales Analysis\nLoaded sales data for Q4.")

execute_cell(kernel_id="a1b2c3d4", code="df.describe()")
  → outputs: [{"type": "result", "text": "       revenue\ncount  ..."}]

save_as(kernel_id="a1b2c3d4", name="q4-sales-analysis")
  → notebook: "q4-sales-analysis.qmd", cells: 4

shutdown_kernel(kernel_id="a1b2c3d4")

Session rehydration

When an agent calls start_kernel(name="q4-sales-analysis") for an existing notebook, the server:

  1. Starts a fresh IPython kernel
  2. Loads the notebook from disk
  3. Re-executes all code cells sequentially to restore kernel state
  4. Returns a summary: cell count and any errors during replay

Agents can resume work across sessions without losing state. The kernel starts fresh, but replaying the cells restores all variables and imports.
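
Conceptually, the replay step boils down to running each saved code cell, in order, against one shared namespace. The sketch below uses plain exec() for illustration; the real server replays cells inside a fresh IPython kernel:

```python
# Illustrative sketch of session rehydration: re-run saved code cells in
# order against one shared namespace so variables and imports come back.
def replay(code_cells: list[str]) -> tuple[dict, list[str]]:
    ns: dict = {}
    errors: list[str] = []
    for i, src in enumerate(code_cells):
        try:
            exec(src, ns)  # shared namespace across cells
        except Exception as e:
            errors.append(f"cell {i}: {e!r}")  # record the error, keep replaying
    return ns, errors

saved_cells = [
    "import math",
    "radius = 3",
    "area = math.pi * radius ** 2",
]
state, errors = replay(saved_cells)
```

Errors during replay are collected rather than fatal, which matches the summary start_kernel returns: a cell count plus any errors encountered.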

Agent-friendly output

At startup, the kernel auto-configures libraries for text-friendly output:

  • Pandas — wide column display, increased row/column limits for readable tables
  • Matplotlib — non-interactive Agg backend (avoids GUI window attempts)
  • Images — all image/png outputs (matplotlib plots, PIL images, etc.) save to <notebook>_files/cell_N.png. The agent gets the file path instead of raw base64.
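
As a sketch of how the image handling might work (the function name and signature are hypothetical, not the server's API): decode the base64 image/png payload from the kernel message, write it under <notebook>_files/, and hand back the relative path:

```python
import base64
import tempfile
from pathlib import Path

# Hypothetical sketch: persist an image/png output (base64-encoded, as
# Jupyter messages carry it) and return the path the agent would see.
def save_png_output(data_b64: str, notebook: str, cell: int, outdir: Path) -> str:
    files_dir = outdir / f"{notebook}_files"
    files_dir.mkdir(parents=True, exist_ok=True)
    path = files_dir / f"cell_{cell}.png"
    path.write_bytes(base64.b64decode(data_b64))
    return str(path.relative_to(outdir))  # relative path, as the tool returns

# Demo with placeholder bytes (just the PNG signature, not a real image).
tiny_png = base64.b64encode(b"\x89PNG\r\n\x1a\n").decode()
with tempfile.TemporaryDirectory() as d:
    rel = save_png_output(tiny_png, "q4-sales-analysis", 3, Path(d))
```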

Finding saved images

execute_cell returns image paths relative to the notebook directory. From the container filesystem, images are at:

/workspace/group/notebooks/<notebook>_files/cell_N.png

For example, a notebook named q4-sales-analysis that produces a plot in cell 3:

  • Relative path (returned by tool): q4-sales-analysis_files/cell_3.png
  • Container path: /workspace/group/notebooks/q4-sales-analysis_files/cell_3.png

If a single cell produces multiple images, they get suffixed: cell_3_1.png, cell_3_2.png, etc.
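
That naming scheme can be expressed as a tiny helper (hypothetical, shown only to pin down the pattern):

```python
def image_name(cell: int, index: int) -> str:
    # First image of a cell is cell_N.png; later ones get _1, _2, ... suffixes.
    return f"cell_{cell}.png" if index == 0 else f"cell_{cell}_{index}.png"

names = [image_name(3, i) for i in range(3)]
```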

Installing dependencies

The container ships with common data-science libraries (pandas, matplotlib, numpy). When you need something else, install it at runtime from a notebook cell using uv:

import subprocess
subprocess.run(["uv", "pip", "install", "--system", "seaborn"], check=True)

Use --system because the container runs without a virtual environment. Installed packages last for the lifetime of the container but are lost when it restarts (idle timeout, deploy, manual stop). Add frequently needed packages to src/pynchy/agent/mcp/notebook.Dockerfile instead.
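
To avoid re-running the install on every session, a cell can probe for the package first. A sketch using only the standard library; ensure_package is a hypothetical helper wrapping the uv command shown above:

```python
import importlib.util
import subprocess

def ensure_package(name: str) -> None:
    # Skip the install when the package is already importable.
    if importlib.util.find_spec(name) is None:
        subprocess.run(["uv", "pip", "install", "--system", name], check=True)

ensure_package("json")  # stdlib module is already present, so nothing runs
```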

Direct file access

Notebooks live inside the workspace folder (/workspace/group/notebooks/), so agents can also:

  • Read .qmd files from previous sessions directly with filesystem tools
  • Edit earlier cells by modifying the .qmd file, then re-executing with start_kernel(name=...)
  • Include notebook files in git commits

Viewing notebooks

JupyterLab runs alongside the MCP server on port 8888 (no auth — designed for Tailscale access). Open http://pynchy-server:8888 to browse and interact with notebooks.

Notebooks auto-save on every execute_cell and add_markdown call, so JupyterLab always shows the latest state.

Idle timeout

The MCP manager stops the container after 30 minutes of no MCP tool calls. Notebook files persist on the host (bind-mounted workspace), so no data is lost. The next agent tool call starts a fresh container.


Want to customize this? Write your own plugin — see the Plugin Authoring Guide. Have an idea but don't want to build it? Open a feature request.