Capture anything running in Xvfb as MP4 video with live panel overlays. Browsers, GUI apps, terminal sessions — anything with a window. SDKs for Python, Ruby, Go, TypeScript, Node, and Java. Generate interactive HTML reports where clicking a step seeks the video to that exact moment.
The screen was right there doing the thing. You just weren't watching.
A test fails in CI. You read the stack trace. You try to reproduce locally. You can't. You add more logging, push, wait 20 minutes for CI, read logs again. You do this three more times before you find the actual problem: a modal was covering the button.
Every scenario is recorded as MP4 video with a live panel overlay showing test status, current step, and any context you want. The HTML report lets you click a step and the video seeks to that exact moment. You see the modal covering the button in five seconds.
Minimal dependencies, easy install.
Records the Xvfb virtual display via ffmpeg with H.264 encoding. One video per scenario. Mobile-compatible MP4 output.
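Under the hood, capturing an Xvfb display with H.264 output typically maps to an ffmpeg invocation like the one below. This is an illustrative sketch, not Thea's actual internals — the display number, resolution, and output path are placeholders.

```python
# Illustrative ffmpeg command for capturing an Xvfb display as H.264 MP4.
# DISPLAY, SIZE, and OUTPUT are placeholders, not Thea's defaults.
DISPLAY = ":99"          # Xvfb display to capture
SIZE = "1280x720"        # capture resolution
OUTPUT = "scenario.mp4"  # one video per scenario

capture_cmd = [
    "ffmpeg",
    "-f", "x11grab",           # grab frames from an X11 display
    "-video_size", SIZE,
    "-i", DISPLAY,
    "-c:v", "libx264",         # H.264 encoding
    "-pix_fmt", "yuv420p",     # widest player compatibility (mobile-safe)
    "-movflags", "+faststart", # moov atom up front for web playback
    OUTPUT,
]
print(" ".join(capture_cmd))
```

The `yuv420p` pixel format and `+faststart` flag are what make the resulting MP4 play in browsers and on phones rather than only in desktop players.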
Named columns below the viewport with live-updating text. Custom width and height per panel. Smart scrolling keeps the active line visible. Supports custom bg_color and opacity for transparent or coloured overlays.
Single-page HTML with embedded video players. Click any step to seek the video. Playback highlights the current step automatically.
Works with Behave, pytest-bdd, Cucumber, or any runner. The report takes a simple list of dicts — no framework coupling.
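To make the "list of dicts" idea concrete, here is a sketch of what step records and the click-to-seek mapping could look like. The dict keys (`name`, `status`, `time`) are assumptions for illustration, not Thea's documented schema.

```python
# Hypothetical step records for the HTML report -- the exact keys
# ("name", "status", "time") are illustrative, not a documented schema.
steps = [
    {"name": "Open login page",   "status": "passed", "time": 0.0},
    {"name": "Enter credentials", "status": "passed", "time": 2.4},
    {"name": "Click submit",      "status": "failed", "time": 5.1},
]

def step_at(seconds, steps):
    """Return the step active at a given video timestamp -- the same
    kind of mapping a report needs to highlight the current step
    during playback."""
    current = None
    for step in steps:
        if step["time"] <= seconds:
            current = step
        else:
            break
    return current

print(step_at(3.0, steps)["name"])  # → Enter credentials
```

Because the input is plain data, any runner — Behave hooks, pytest fixtures, a bare script — can accumulate this list as it goes.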
No pip install chain headaches. Just add it to your Docker image and go.
Example Dockerfile included. Works with any CI system that supports Docker. Just mount a volume for the recordings.
Automatic warnings when panels overlap, run off the canvas, or are taller than the space allocated for the panel bar. Generate an SVG testcard to visualise the layout before you record.
Tile multiple recordings side-by-side, stacked, or in a grid. Add timed highlight borders to call attention to specific moments.
Built-in Director for realistic mouse movement, natural typing rhythm, and window management. Smooth trajectories, not instant teleportation.
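One common way to get smooth trajectories rather than teleportation is eased interpolation between cursor positions. The sketch below shows the idea with a smoothstep curve — it is not the Director's actual implementation.

```python
def ease_in_out(t):
    """Smoothstep easing: slow start, fast middle, slow finish."""
    return t * t * (3 - 2 * t)

def mouse_path(start, end, steps=20):
    """Interpolate a smooth cursor trajectory from start to end.
    A real driver would move the pointer to each point in turn with
    a short sleep, producing human-looking motion."""
    (x0, y0), (x1, y1) = start, end
    path = []
    for i in range(steps + 1):
        t = ease_in_out(i / steps)
        path.append((round(x0 + (x1 - x0) * t),
                     round(y0 + (y1 - y0) * t)))
    return path

path = mouse_path((100, 100), (500, 300))
print(path[0], path[-1])  # endpoints match start and end exactly
```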
Watch displays in real-time via MJPEG streams. Open /display/view in your browser or embed the stream in any HTML page. Built-in dashboard at /dashboard shows all sessions at once.
Every action is timestamped. Track display starts, recordings, panel updates, and more with the per-session event log. Poll /events for live updates.
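Incremental polling of an event log can be modelled as a cursor over a timestamped list. The in-memory sketch below shows the shape of that exchange — a real client would hit `/events` over HTTP instead, and the `since` cursor and event fields here are assumptions, not Thea's documented API.

```python
import time

class EventLog:
    """In-memory sketch of a per-session event log. A real client
    would poll the /events endpoint; the cursor semantics and event
    fields here are illustrative assumptions."""

    def __init__(self):
        self.events = []

    def record(self, kind, **details):
        self.events.append({"seq": len(self.events), "ts": time.time(),
                            "kind": kind, **details})

    def poll(self, since=0):
        """Return events at or after the cursor, plus the next cursor,
        so repeated polls never miss or duplicate an event."""
        fresh = [e for e in self.events if e["seq"] >= since]
        return fresh, len(self.events)

log = EventLog()
log.record("display_started", display=":99")
log.record("recording_started", scenario="login")
events, cursor = log.poll()
print([e["kind"] for e in events], cursor)
```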
Grab JPEG frames from live displays or extract frames from recorded videos at any time offset. Perfect for CI thumbnails and test reports.
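Extracting a frame at a time offset typically boils down to an ffmpeg invocation like the one built here. The helper is illustrative — Thea exposes this over HTTP rather than asking you to shell out yourself.

```python
def frame_cmd(video, seconds, out_jpg):
    """Build an ffmpeg command that grabs one frame at a time offset.
    Illustrative only -- shown to make the mechanics concrete."""
    return [
        "ffmpeg",
        "-ss", str(seconds),  # seek before decoding: fast for thumbnails
        "-i", video,
        "-frames:v", "1",     # emit exactly one frame
        "-q:v", "2",          # high JPEG quality
        out_jpg,
    ]

cmd = frame_cmd("scenario.mp4", 5.1, "thumb.jpg")
print(" ".join(cmd))
```

Placing `-ss` before `-i` makes ffmpeg seek on the input before decoding, which is what keeps thumbnail extraction fast even on long recordings.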
Mark key moments during a recording with timestamped labels. Annotations are returned when the recording stops and appear in the event log for richer reports and debugging.
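Conceptually, an annotation is a label stamped with its offset from recording start, with the whole list handed back when the recording stops. The class below is a sketch of that bookkeeping — the method and field names are illustrative, not Thea's actual SDK surface.

```python
import time

class Recording:
    """Sketch of annotation bookkeeping: timestamps are taken relative
    to recording start, and the list is returned on stop. Names here
    are illustrative, not the real SDK."""

    def __init__(self):
        self.started = time.monotonic()
        self.annotations = []

    def annotate(self, label):
        offset = time.monotonic() - self.started
        self.annotations.append({"t": round(offset, 3), "label": label})

    def stop(self):
        return {"annotations": self.annotations}

rec = Recording()
rec.annotate("modal appears")
rec.annotate("button obscured")
result = rec.stop()
print([a["label"] for a in result["annotations"]])
```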
One server process. Any language. The recorder runs as a service that your test suite talks to via HTTP.
SDKs for Python, Ruby, Go, TypeScript, Node, and Java. Minimal dependencies, easy install. Same API everywhere.
Start the server, connect from any language, or use the CLI directly from bash.
Click a step. Watch what happened. Dark-themed, responsive, single-file HTML.
Recordings of green tests are living proof that your features actually work, updated every time your suite runs.
QA can communicate issues effectively by sharing documented video evidence of both successes and failures. No need to reproduce bugs or explain what happened — the recording shows exactly what the screen did.
New team members can watch the test suite to understand what the app does. It's a demo reel that's always current, generated automatically from the test run.
Product managers want to see features working, not test output. Recordings are proof that the sprint deliverables actually function on a real screen.
Some industries require evidence that testing was performed. Video recordings with timestamped steps are a far stronger artefact than a JUnit XML report.
You don't need a test framework. Write a Python script that drives an application, narrates each scene in the overlay panel, and produces a polished MP4. Ship a fresh demo video with every release — no presenter required.
One server manages multiple independent sessions. Pretend to be Alice, Bob, and Carol at the same time: each gets their own virtual display, their own recording, and their own panel overlay — driven by threads or separate processes from a single script.
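The thread-per-user pattern can be sketched as follows. A fake results dict stands in for the real HTTP calls — in an actual script each thread would open its own Thea session and drive its own display.

```python
import threading

def run_session(user, results):
    """Stand-in for one scripted user. In a real script this thread
    would create its own Thea session over HTTP and drive its own
    virtual display; here a plain list records what each user did."""
    actions = []
    for step in ("open app", "send message", "log out"):
        actions.append(f"{user}: {step}")
    results[user] = actions

results = {}
threads = [threading.Thread(target=run_session, args=(u, results))
           for u in ("alice", "bob", "carol")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # → ['alice', 'bob', 'carol']
```

Because sessions are isolated server-side, the threads need no coordination beyond joining at the end.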
Launch xterm on a Thea virtual display, type commands with visible keystrokes using xdotool, and narrate each step with panel overlays. The result is a polished MP4 demo of any command-line tool — no screen recorder, no post-production.
xdotool types each character with a configurable delay. Viewers see commands appear letter-by-letter, just like a real terminal session.
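For reference, `xdotool type --delay` takes the per-keystroke delay in milliseconds. The helper below builds such an invocation; the display number and delay are illustrative.

```python
def type_cmd(text, delay_ms=120, display=":99"):
    """Build an xdotool invocation that types text letter-by-letter.
    --delay is milliseconds between keystrokes; the DISPLAY value
    here is illustrative."""
    return ["env", f"DISPLAY={display}",
            "xdotool", "type", "--delay", str(delay_ms), text]

cmd = type_cmd("ls -la")
print(" ".join(cmd))
```

Around 100–150 ms per keystroke tends to read as natural typing on video; faster looks pasted, slower drags.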
Update overlay panels between steps to explain what's happening. The panel bar renders below the terminal viewport and is baked into the MP4.
Run a mock HTTP server alongside the recording. The CLI talks to the mock, producing realistic output without needing real infrastructure.
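A minimal mock is a few lines with the standard library. The route and payload below are invented for the sketch, but the block runs as-is: it serves canned JSON on a free port, queries itself, and shuts down.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockAPI(BaseHTTPRequestHandler):
    """Canned responses so the CLI under demo produces realistic output
    without real infrastructure. The route and payload are invented."""

    def do_GET(self):
        body = json.dumps({"status": "ok", "uptime": "42d"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo terminal quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MockAPI)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/status"
reply = json.load(urllib.request.urlopen(url))
server.shutdown()
print(reply["status"])  # → ok
```

Point the CLI under demo at `http://127.0.0.1:<port>` and every take produces identical, presentable output.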
A single thea serve process manages any number of concurrent recording sessions. Each session gets its own Xvfb display, its own ffmpeg process, and its own panel overlay — completely isolated from every other session.
Build up panels, set custom widths and heights, then validate the layout automatically or generate an SVG testcard to see exactly what the recording will look like — before ffmpeg captures a single frame.
Every add_panel and start_recording call validates the layout and returns warnings if panels overlap, exceed the canvas, or the bar is taller than the allocated display space. The CLI prints them to stderr; pass --ignore-warnings to suppress.
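The checks themselves amount to rectangle arithmetic. Here is a sketch of how such validation could work — the warning wording and the panel tuple shape are invented for illustration.

```python
def validate_layout(canvas_w, bar_h, panels):
    """Sketch of layout validation: panels is a list of
    (name, x, width, height) tuples placed in a bar of height bar_h
    below the viewport. Warning wording is invented."""
    warnings = []
    placed = []
    for name, x, w, h in panels:
        if x + w > canvas_w:
            warnings.append(f"panel '{name}' exceeds the canvas width")
        if h > bar_h:
            warnings.append(f"panel '{name}' is taller than the bar")
        for other, ox, ow in placed:
            if x < ox + ow and ox < x + w:  # horizontal intervals intersect
                warnings.append(f"panels '{other}' and '{name}' overlap")
        placed.append((name, x, w))
    return warnings

warns = validate_layout(1280, 120, [
    ("status", 0, 700, 100),
    ("step", 600, 700, 100),  # overlaps "status" and runs off canvas
])
print(warns)
```

Surfacing these as warnings rather than errors matches the CLI behaviour described above: you can heed them, or suppress them and record anyway.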