Recording

Record your virtual display.
Watch it back.

Capture anything running in Xvfb as MP4 video with live panel overlays. Browsers, GUI apps, terminal sessions — anything with a window. SDKs for Python, Ruby, Go, TypeScript, Node, and Java. Generate interactive HTML reports where clicking a step seeks the video to that exact moment.

Python

from thea import RecorderClient

client = RecorderClient("http://localhost:9123")
client.start_display()

with client.recording("login_test"):
    # run your application
    pass

client.download_recording("login_test", "login_test.mp4")
Ruby

require "recorder"

client = Recorder::Client.new("http://localhost:9123")
client.start_display

client.recording("login_test") do
  # run your application
end

client.download_recording("login_test", "login_test.mp4")
Go

client := thea.NewClient("http://localhost:9123")
client.StartDisplay(ctx)

stop, _ := client.Recording(ctx, "login_test")
defer stop()
// run your application

client.DownloadRecordingToFile(ctx, "login_test", "login_test.mp4")
TypeScript

import { RecorderClient } from "thea-recorder";

const client: RecorderClient = new RecorderClient({
  url: "http://localhost:9123",
});

await client.startDisplay();
await client.recording("login_test", async () => {
  // run your application
});
await client.downloadRecordingToFile("login_test", "login_test.mp4");
Node

const { RecorderClient } = require("thea-recorder");

const client = new RecorderClient({ url: "http://localhost:9123" });

await client.startDisplay();
await client.recording("login_test", async () => {
  // run your application
});
await client.downloadRecordingToFile("login_test", "login_test.mp4");
CLI

# start display and record
recorder start-display
recorder start-recording --name login_test

# run your test, then stop and download
recorder stop-recording
recorder download --name login_test -o login_test.mp4
HTTP

# start display and record
curl -X POST http://localhost:9123/display/start
curl -X POST http://localhost:9123/recording/start \
  -H "Content-Type: application/json" \
  -d '{"name": "login_test"}'

# run your test, then stop and download
curl -X POST http://localhost:9123/recording/stop
curl -o login_test.mp4 http://localhost:9123/recordings/login_test
pip install thea-recorder

Failures are a black box

The screen was right there doing the thing. You just weren't watching.

Without recorder

The debugging loop from hell

A test fails in CI. You read the stack trace. You try to reproduce locally. You can't. You add more logging, push, wait 20 minutes for CI, read logs again. You do this three more times before you find the actual problem: a modal was covering the button.

1. Read the failure log
2. Try to reproduce locally
3. Add more logging
4. Push, wait for CI, read logs again
5. goto 3
With recorder

You stop guessing. You start watching.

Every scenario is recorded as MP4 video with a live panel overlay showing test status, current step, and any context you want. The HTML report lets you click a step and the video seeks to that exact moment. You see the modal covering the button in five seconds.

1. Open report.html
2. Click the failed step
3. Watch what happened
4. Fix it

Everything you need.
Nothing you don't.

Minimal dependencies, easy install.

MP4 video capture

Records the Xvfb virtual display via ffmpeg with H.264 encoding. One video per scenario. Mobile-compatible MP4 output.
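Under the hood this is an x11grab capture. As a rough sketch, here is the kind of ffmpeg invocation involved — the exact flags Thea passes are an assumption, but each one shown is a real ffmpeg option:

```python
# Sketch: the kind of ffmpeg command used to capture an Xvfb display.
# Illustrative only; Thea's actual flags may differ.
def x11grab_command(display=":99", size="1920x1080", fps=24, out="scenario.mp4"):
    return [
        "ffmpeg", "-y",
        "-f", "x11grab",            # capture an X11 display
        "-video_size", size,
        "-framerate", str(fps),
        "-i", display,
        "-c:v", "libx264",          # H.264 encoding
        "-pix_fmt", "yuv420p",      # needed for broad player/mobile compatibility
        "-movflags", "+faststart",  # moov atom up front for web playback
        out,
    ]

print(" ".join(x11grab_command()))
```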

Panel overlay system

Named columns below the viewport with live-updating text. Custom width and height per panel. Smart scrolling keeps the active line visible. Supports custom bg_color and opacity for transparent or coloured overlays.

Interactive reports

Single-page HTML with embedded video players. Click any step to seek the video. Playback highlights the current step automatically.

Framework agnostic

Works with Behave, pytest-bdd, Cucumber, or any runner. The report takes a simple list of dicts — no framework coupling.
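For illustration, the "list of dicts" contract might look like this. The key names follow the Behave integration example elsewhere on this page; treat the exact schema as an assumption:

```python
# Sketch: a framework-neutral result list of the kind the report consumes.
# Key names mirror the Behave example on this page; the schema is assumed.
results = [
    {"feature": "Invoices", "scenario": "List invoices", "status": "passed",
     "video": "recordings/list_invoices.mp4"},
    {"feature": "Tax", "scenario": "Submit BAS return", "status": "failed",
     "video": "recordings/submit_bas.mp4"},
]

# Any runner can build this: Behave hooks, pytest fixtures, or a plain loop.
passed = sum(1 for r in results if r["status"] == "passed")
failed = len(results) - passed
print(f"{passed} passed, {failed} failed")
```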

Minimal dependencies

No pip install chain headaches. Just add it to your Docker image and go.

Docker ready

Example Dockerfile included. Works with any CI system that supports Docker. Just mount a volume for the recordings.

Layout validation

Automatic warnings when panels overlap, exceed the canvas, or don't fit. Generate an SVG testcard to visualise the layout before you record.

Video composition

Tile multiple recordings side-by-side, stacked, or in a grid. Add timed highlight borders to call attention to specific moments.

Human-like interaction

Built-in Director for realistic mouse movement, natural typing rhythm, and window management. Smooth trajectories, not instant teleportation.
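The idea behind smooth trajectories can be sketched in a few lines. This is the generic technique, not the actual Director implementation:

```python
# Sketch: eased mouse trajectory between two points.
# Generic interpolation only; not the Director's actual code.
def ease_in_out(t):
    """Smoothstep easing: slow start, fast middle, slow finish."""
    return t * t * (3 - 2 * t)

def trajectory(start, end, steps=30):
    """Return intermediate (x, y) points for a smooth mouse move."""
    (x0, y0), (x1, y1) = start, end
    points = []
    for i in range(1, steps + 1):
        t = ease_in_out(i / steps)
        points.append((round(x0 + (x1 - x0) * t), round(y0 + (y1 - y0) * t)))
    return points

# Feed each point to an input driver (e.g. xdotool mousemove) with a short sleep.
path = trajectory((100, 100), (800, 500))
```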

Live streaming

Watch displays in real-time via MJPEG streams. Open /display/view in your browser or embed the stream in any HTML page. Built-in dashboard at /dashboard shows all sessions at once.

Event log

Every action is timestamped. Track display starts, recordings, panel updates, and more with the per-session event log. Poll /events for live updates.

Screenshot capture

Grab JPEG frames from live displays or extract frames from recorded videos at any time offset. Perfect for CI thumbnails and test reports.

Recording annotations

Mark key moments during a recording with timestamped labels. Annotations are returned when the recording stops and appear in the event log for richer reports and debugging.

HTTP server + native SDKs

One server process. Any language. The recorder runs as a service that your test suite talks to via HTTP.

[Architecture diagram] Your test suite (Go · Py · Ruby · TS · Node) drives Your App on DISPLAY :99 and manages the Recorder Server (thea serve :9123) over HTTP. The server renders the virtual display :99 via Xvfb + ffmpeg and produces MP4 videos plus report.html, downloaded via the HTTP API.

Record tests in any language.
Under 10 lines of code.

SDKs for Python, Ruby, Go, TypeScript, Node, and Java. Minimal dependencies, easy install. Same API everywhere.


Drop-in. Five minutes.

Start the server, connect from any language, or use the CLI directly from bash.

# features/environment.py — drop-in Behave integration
from thea import Recorder, generate_report

def before_all(context):
    recorder = Recorder(output_dir="/app/recordings", display=99)
    recorder.add_panel("status", title="Status", width=120)
    recorder.add_panel("scenario", title="Scenario")
    recorder.start_display()
    context.recorder = recorder
    context.recorded_videos = []

def before_scenario(context, scenario):
    context.recorder.start_recording(scenario.name)
    context.recorder.update_panel("status", "Running")

def after_scenario(context, scenario):
    video = context.recorder.stop_recording()
    context.recorded_videos.append({
        "feature": scenario.feature.name,
        "scenario": scenario.name,
        "status": scenario.status.name,
        "video": video,
    })

def after_all(context):
    context.recorder.cleanup()
    generate_report(context.recorded_videos, title="E2E Test Report")
# Dockerfile
FROM python:3.12-slim

RUN apt-get update && apt-get install -qyy --no-install-recommends \
    chromium-driver xvfb ffmpeg x11-xserver-utils \
    fonts-dejavu-core \
    && rm -rf /var/lib/apt/lists/*

RUN pip install thea-recorder behave selenium

COPY features/ /app/features/
WORKDIR /app

ENTRYPOINT ["behave", "--no-capture"]
# Build and run
$ docker build -t my-e2e-tests .
$ docker run --shm-size=2g \
    -v $(pwd)/recordings:/app/recordings \
    my-e2e-tests

# Open the report
$ open recordings/report.html

# That's it. Every scenario is an MP4.
# The report has clickable step timelines.
# You never stare at a stack trace again.

Reports that tell the whole story

Click a step. Watch what happened. Dark-themed, responsive, single-file HTML.

[Report mockup: recordings/report.html]

"E2E Test Report" (Automated test recordings). Summary: 8 scenarios, 7 passed, 1 failed.

Invoice Management (PASS): the recording shows app.ledgerco.dev/invoices with three invoices (INV-041 Acme Corp $12,400 PAID · INV-042 Globex Inc $8,750 DUE · INV-043 Initech $3,200 PAID) and a live status panel. Clickable step timeline: 0:00 Given 3 invoices exist · 0:02 When I open the invoice list · 0:04 Then I see all invoices · 0:06 And totals are correct.

Tax Submission (FAIL): the recording shows app.ledgerco.dev/tax/submit submitting a Q4 2025 BAS return (GST collected $14,280.00, GST credits $6,140.00), ending in the on-screen error "ABN validation failed — gateway timeout". Step timeline: 0:00 Given Q4 tax data · 0:04 When I submit the return · 0:09 Then I see a receipt (failed at this step).

Passing tests are documentation too

Recordings of green tests are living proof that your features actually work, updated every time your suite runs.

Share with QA

QA can report issues by sharing video evidence of both successes and failures. There's no need to reproduce bugs or explain what happened — the recording shows exactly what the screen did.

Onboard new developers

New team members can watch the test suite to understand what the app does. It's a demo reel that's always current, generated automatically from the test run.

Prove it to stakeholders

Product managers want to see features working, not test output. Recordings are proof that the sprint deliverables actually function on a real screen.

Audit and compliance

Some industries require evidence that testing was performed. Video recordings with timestamped steps are a far stronger artefact than a JUnit XML report.

Scripted product demos

You don't need a test framework. Write a Python script that drives an application, narrates each scene in the overlay panel, and produces a polished MP4. Ship a fresh demo video with every release — no presenter required.

Parallel user simulation

One server manages multiple independent sessions. Pretend to be Alice, Bob, and Carol at the same time: each gets their own virtual display, their own recording, and their own panel overlay — driven by threads or separate processes from a single script.

Record CLI tools. Not just browsers.

Launch xterm on the Thea virtual display, type commands with visible keystrokes using xdotool, and narrate each step with panel overlays. The result is a polished MP4 demo of any command-line tool — no screen recorder, no post-production.

# record_all.py — orchestrate terminal demos with Thea
from thea import Recorder, generate_report
from demos import quickstart, verifications

recorder = Recorder(
    output_dir="./output",
    display=99,
    browser_size="1280x1080",  # terminal area (panels add 300px below)
    framerate=24,
)

# Overlay panels: scene title + current step
recorder.add_panel("scene", title="Demo", width=200)
recorder.add_panel("step", title="Step")
recorder.start_display()

# Record each demo as a separate MP4
videos = []
for name, module in [("quickstart", quickstart), ("verifications", verifications)]:
    recorder.start_recording(name)
    module.record(recorder)  # drives xterm + updates panels
    video = recorder.stop_recording()
    videos.append({"scenario": name, "status": "passed", "video": video})

recorder.cleanup()
generate_report(videos, title="CLI Demos")
# demos/terminal.py — xdotool helpers for typed commands
import subprocess, time

TYPE_DELAY_MS = 40   # ms between keystrokes
PROMPT_PAUSE = 0.4   # pause before typing
RESULT_PAUSE = 1.5   # pause after output

def launch_xterm(geometry="110x40"):
    """Launch xterm fullscreen on the Xvfb display."""
    proc = subprocess.Popen([
        "xterm",
        "-geometry", geometry,
        "-fa", "JetBrains Mono", "-fs", "18",
        "-bg", "#000000", "-fg", "#00ff00",
        "-e", "bash", "--login",
    ])
    # Wait for window, then maximize with xdotool
    wid = _wait_for_window()
    _maximize(wid)
    return proc, wid

def type_text(text, delay_ms=TYPE_DELAY_MS):
    subprocess.run(["xdotool", "type", "--delay", str(delay_ms), text])

def run_command(cmd, pause_after=RESULT_PAUSE):
    """Type a command, press Enter, wait for output."""
    time.sleep(PROMPT_PAUSE)
    type_text(cmd)
    subprocess.run(["xdotool", "key", "Return"])
    time.sleep(pause_after)
# demos/quickstart.py — a single demo script
from demos.terminal import launch_xterm, run_command, wait, clear_screen

def record(client):
    """Run the quickstart demo on the Thea display."""
    client.update_panel("scene", "Quickstart")
    proc, wid = launch_xterm()
    try:
        # Step 1
        client.update_panel("step", "1. Register a domain")
        run_command("dm domains add example.com", pause_after=3)

        # Step 2
        client.update_panel("step", "2. Add a forwarding rule")
        run_command("dm rules add example.com hello you@gmail.com")

        # Step 3
        client.update_panel("step", "3. List all rules")
        run_command("dm rules list", pause_after=3)
        clear_screen()

        # Step 4
        client.update_panel("step", "4. Activity log")
        run_command("dm rules log example.com hello", pause_after=3)

        client.update_panel("step", "Done!")
        wait(2)
    finally:
        proc.terminate()
! Xresources — style the xterm for recording
! Classic green-on-black terminal look
XTerm*faceName: JetBrains Mono
XTerm*faceSize: 18
XTerm*background: #000000
XTerm*foreground: #00ff00
XTerm*cursorColor: #00ff00
XTerm*scrollBar: false
XTerm*internalBorder: 16
XTerm*termName: xterm-256color
XTerm*loginShell: true
XTerm*saveLines: 5000

! ANSI colours
XTerm*color0: #000000
XTerm*color1: #cc0000
XTerm*color2: #00cc00
XTerm*color3: #cccc00
XTerm*color10: #55ff55
XTerm*color15: #ffffff
# Dockerfile — everything needed for terminal recordings
FROM python:3.12-slim

RUN apt-get update && apt-get install -qyy --no-install-recommends \
    xvfb ffmpeg x11-xserver-utils xdotool xterm \
    fonts-jetbrains-mono fonts-dejavu-core \
    && rm -rf /var/lib/apt/lists/*

RUN pip install --no-cache-dir thea-recorder

# xterm styling
COPY Xresources /root/.Xresources

# Your CLI tool + demo scripts
COPY demos/ demos/
COPY record_all.py .

ENTRYPOINT ["python", "record_all.py"]

# Run with: docker compose up
# Output: ./output/*.mp4 + report.html

Visible keystrokes

xdotool types each character with a configurable delay. Viewers see commands appear letter-by-letter, just like a real terminal session.

Panel narration

Update overlay panels between steps to explain what's happening. The panel bar renders below the terminal viewport and is baked into the MP4.

Mock APIs

Run a mock HTTP server alongside the recording. The CLI talks to the mock, producing realistic output without needing real infrastructure.
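A minimal self-contained sketch of that pattern: stand up a stub HTTP server in a background thread and point the CLI under demo at it. The endpoint and payload here are invented for illustration:

```python
# Sketch: a throwaway mock API running alongside a recording.
# The response payload is made up for illustration.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok", "invoices": 3}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep recording output clean

server = HTTPServer(("127.0.0.1", 0), MockHandler)  # port 0: pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Point the CLI at http://127.0.0.1:{port}, run the demo,
# then call server.shutdown() when the recording stops.
print(f"mock API listening on port {port}")
```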

One server. Multiple sessions.

A single thea serve process manages any number of concurrent recording sessions. Each session gets its own Xvfb display, its own ffmpeg process, and its own panel overlay — completely isolated from every other session.

import threading
from thea import RecorderClient

def user_session(user_id):
    client = RecorderClient("http://localhost:9123")

    # Each session gets its own Xvfb display, auto-allocated
    client.create_session(f"user_{user_id}")
    client.use_session(f"user_{user_id}")
    client.start_display()
    client.add_panel("status", title="Status")

    with client.recording(f"user_{user_id}_checkout") as result:
        # drive this user's session independently…
        client.update_panel("status", "Checking out")

    print(result.path, result.elapsed)
    client.delete_session(f"user_{user_id}")

threads = [threading.Thread(target=user_session, args=(i,)) for i in [1, 2, 3]]
for t in threads:
    t.start()
for t in threads:
    t.join()
from thea import RecorderClient

client = RecorderClient("http://localhost:9123")
client.start_display()
client.add_panel("scene", title="Scene", width=260)
client.add_panel("action", title="Action")

def narrate(scene, action):
    client.update_panel("scene", scene)
    client.update_panel("action", action)

with client.recording("product_demo_v2") as result:
    narrate("Login", "Navigating to the login page")
    # driver.get("https://app.example.com/login")
    narrate("Login", "Entering credentials")
    narrate("Dashboard", "Key metrics at a glance")
    narrate("Reports", "Exporting monthly PDF…")

print(f"Demo: {result.path} ({result.elapsed:.1f}s)")
client.cleanup()
# Create two independent sessions
curl -X POST http://localhost:9123/sessions -d '{"name":"alice"}'
# → {"name":"alice","display":100,"url_prefix":"/sessions/alice"}
curl -X POST http://localhost:9123/sessions -d '{"name":"bob"}'
# → {"name":"bob","display":101,"url_prefix":"/sessions/bob"}

# Each session has its own Xvfb, panels, and recording
curl -X POST http://localhost:9123/sessions/alice/display/start
curl -X POST http://localhost:9123/sessions/bob/display/start

curl -X POST http://localhost:9123/sessions/alice/recording/start \
  -d '{"name":"alice_checkout"}'
curl -X POST http://localhost:9123/sessions/bob/recording/start \
  -d '{"name":"bob_checkout"}'

# … both sessions record independently …

curl -X POST http://localhost:9123/sessions/alice/recording/stop
curl -X POST http://localhost:9123/sessions/bob/recording/stop

curl -X DELETE http://localhost:9123/sessions/alice
curl -X DELETE http://localhost:9123/sessions/bob

See your layout before you record.

Build up panels, set custom widths and heights, then validate the layout automatically or generate an SVG testcard to see exactly what the recording will look like — before ffmpeg captures a single frame.

Testcard output
[SVG testcard: "THEA LAYOUT TESTCARD" showing the app viewport (1920x1080 at (0,0)) and two panels below it: "status" (120x300 at (0,1080)) and "scenario" (1800x300 at (120,1080)). Canvas: 1920 x 1380.]
from thea import RecorderClient

client = RecorderClient("http://localhost:9123")
client.start_display()

# Panels return warnings automatically
result = client.add_panel("status", title="Status", width=120)
result = client.add_panel("scenario", title="Scenario")

# Explicit validation
info = client.validate_layout()
print(info["valid"])     # True
print(info["warnings"])  # []

# Get an SVG testcard
svg = client.testcard()
with open("layout.svg", "w") as f:
    f.write(svg)
# Add panels (warnings print to stderr)
thea add-panel --name status --title Status --width 120
thea add-panel --name scenario --title Scenario

# Short panel for a compact bar
thea add-panel --name timer --height 60

# Validate the layout
thea validate-layout

# Save an SVG testcard
thea testcard -o layout.svg

# Suppress warnings if you want
thea --ignore-warnings start-recording --name demo
# Add a panel (response includes warnings)
curl -X POST http://localhost:9123/panels \
  -H "Content-Type: application/json" \
  -d '{"name":"status","title":"Status","width":120,"height":200}'
# → {"name":"status","width":120,"height":200,"warnings":[]}

# Validate the layout
curl http://localhost:9123/validate-layout
# → {"warnings":[],"valid":true}

# Get an SVG testcard
curl http://localhost:9123/testcard -o layout.svg

Automatic warnings

Every add_panel and start_recording call validates the layout and returns warnings if panels overlap, exceed the canvas, or if the bar is taller than the allocated display space. The CLI prints them to stderr; pass --ignore-warnings to suppress.
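The overlap and canvas checks amount to simple rectangle arithmetic. A generic sketch of that logic (not Thea's actual implementation), using the testcard coordinates from this page:

```python
# Sketch: the kind of rectangle checks behind layout validation.
# Not Thea's code; just the geometry the warnings describe.
def validate(panels, canvas_w, canvas_h):
    """panels: list of (name, x, y, w, h). Returns a list of warning strings."""
    warnings = []
    for name, x, y, w, h in panels:
        if x + w > canvas_w or y + h > canvas_h:
            warnings.append(f"panel '{name}' exceeds the canvas")
    for i, (a, ax, ay, aw, ah) in enumerate(panels):
        for b, bx, by, bw, bh in panels[i + 1:]:
            if ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah:
                warnings.append(f"panels '{a}' and '{b}' overlap")
    return warnings

# The testcard layout: status and scenario sit side by side below the app.
print(validate(
    [("status", 0, 1080, 120, 300), ("scenario", 120, 1080, 1800, 300)],
    canvas_w=1920, canvas_h=1380,
))
```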