Note on paths: All absolute filesystem paths in this article are anonymized. The directory hierarchy and logic correspond to the actual incident; the specific names (e.g.
~/projects/private/) are generic.
"Claude Code is a time machine" — that sounds like AI marketing. I'd have scrolled past it myself if I hadn't needed it. But I'm writing that sentence now, not because I want to sell Claude Code, but because it rescued my private blog repo. Almost. 84 percent of it, to be precise.
Every developer knows the rule: back up your work. I tell my colleagues to do it. I tell friends to do it. My private blog repo was sitting locally, no remote, no Time Machine. That's the kind of hypocrisy you forgive yourself for a long time — until it catches up with you. In my case, it caught up with me on April 4, 2026, at 01:38:23, when I typed one character too many.
In the end I had 84 percent of the repo back. Not because I had a backup. But because Claude Code produces a byproduct that — if you didn't build it for that purpose but happen to have it anyway — behaves like a backup. That's a subtle but important distinction. And that distinction is the subject of this article.
The Mistake as a Cleanup Cascade
To put the mistake in context: for a few months now I've been working on claude-forge, my own learning project alongside the SAGE/SageKit development (a larger AI tooling system I'm building on the side). Two ClaudeSDKClient instances — one simulates a developer, the other is Claude Code itself — autonomously develop projects in dialogue. As an experimental setup, very instructive. As a producer of artifacts in hard-to-predict places on the filesystem, also.
That Friday evening, claude-forge had dropped its test outputs not, as configured, into ~/projects/private/claude-forge-test/, but scattered across several locations: ~/projects/tui-calc, ~/projects/.superpowers/…, ~/projects/private/{calculator,tui-calculator,calc-tui}. Before going to bed I wanted to clean up.
The Zsh history shows the sequence in the small hours of Saturday, April 4, down to the second:
```
01:37:15 — rm -Rf ~/projects/tui-calc
01:38:10 — rm -Rf ~/projects/.superpowers
01:38:23 — rm -Rf ~/projects/private
```
Thirteen seconds between the second and the third command. The third one was wrong. The plan had been rm -Rf ~/projects/private/calc-tui — the path slipped one level too high. It wasn't the first rm -rf of the evening. It was the sixth in a cleanup sequence spanning roughly 90 seconds. I had become too practiced. Routine beats attention, that's well known, and yet you underestimate it regularly.
The peculiar thing about this command: it didn't take everything with it. I cancelled it — probably by instinct — after fractions of a second. Two files survived, and they were precisely the two least dramatic ones: CLAUDE.md and generate-feature-images.py. Not a single article, not a single commit hash, not a single metadata file. Two helper files, like a cosmic joke.
Taking Stock the Next Morning
The first reflex after the abort wasn't panic, it was searching — but not on the MacBook. I spent the twelve minutes between Ctrl+C and tmutil listlocalsnapshots on the iPad with the Claude app. Partly to figure out what, after a fresh rm -rf on APFS, was still rescuable at all. Above all, though, to avoid any further write operation on the MacBook for as long as I didn't yet know which deleted blocks could still be saved. The iPad's answer confirmed the instinct: "Stop! Don't write anything else to the affected volume!"
Only at 01:48, after the Claude app on the iPad had pointed me in a rough direction, did I go back to the terminal — with tmutil listlocalsnapshots /. Shortly after: tmutil listlocalsnapshots /System/Volumes/Data, then diskutil info /, then a Spotlight search via mdfind over source-code artifacts. At 01:50 a USB stick named TRANSER was in the machine, and I copied onto it, just in case, everything Spotlight still knew under ~/projects/private/. As professional as the first reflex looked, it was just as clear after twenty minutes: there was no active Time Machine configuration. There were no useful local snapshots. There was a USB stick with a handful of salvaged blobs.
Saturday afternoon, after a few hours of sleep, the sober picture:
- `.git/index` intact: around 60 KB, 490 files referenced
- `.git/objects/` partial: 132 loose blobs survived, 59 of them importable as valid Git objects
- `.git/pack/`: gone, completely
- 34 commit objects survived, the reflog with 260 entries intact
At 15:14 I started a Claude Code session that would later get the custom title "recovery", and I described the situation in the JSONL the way it now reads verbatim, typos and all:
"I made a mistake yesterday — ran `rm -Rf ~/projects/private/`. Cancelled immediately. But the blogs are gone and the .git directory is corrupt."
That's the state you start a recovery session in — half a sentence, one typo, one irreversible command.
The Claude analysis afterwards, paraphrased: the index file knows 490 files. But 419 of 463 content blobs are missing. The salvaged blobs have no overlap with the missing ones — they're old versions, not the current state. So the index knows precisely what's missing. Exactly that is what's missing.
That's the kind of finding that has a clarifying effect on a Saturday afternoon. You have an inventory, but no goods.
The Cobbler's Children Moment
So here I am, the developer who tells other people to make backups, trying to reconstruct four months of work from half a .git directory: 15 blog articles, each in up to five language and platform variants (svenpoeche DE+EN, MF Blog DE, Medium EN, DEV.to EN), plus more than 55 Excalidraw diagrams and their corresponding feature images. This is no longer a hobby collection. This is a publishing infrastructure.
The honest thought at that point was: give up. The published articles are on svenpoeche.de, on the Mayflower blog, on Medium, and on DEV.to. Those I can copy back. But what I can't copy back: drafts, metadata files, Excalidraw sources, half-finished translations, research notes. Everything that comes into being between "idea" and "published" and exists nowhere other than on the hard drive I just partially erased.
Roughly at this point the second wave of embarrassment arrived, which had less to do with the incident itself than with the role I usually play as a senior developer: I'm the one who, on every new project, raises the question of whether the backup concept is in place. Not in a lecturing tone, just as a matter-of-course question. The same question that, in my case, would have been getting a dishonest answer for months.
Then I said a sentence to Claude that, in hindsight, I would call the turning point — not a flash of insight, but a routine instruction I'd given in other contexts before, just never with this much at stake:
The Jackpot Sentence
The sentence, verbatim, was:
"Just search through your JSONL files."
I'd typed it in German, in a hurry, typos and all, and the JSONL has every keystroke on record. It wasn't meant as a flash of insight, it was a hypothesis tossed off in passing. The underlying idea was trivial: Claude Code stores every conversation locally as a JSONL file. Every Write tool call contains the full file content as a parameter. Every Edit contains the fragment being replaced and the new fragment. Bash heredocs contain the content as part of the command. If I had been using Claude Code since December for my blog articles, then the complete text of every file Claude had ever written for me had to be in those JSONL logs.
Every file Claude Code has ever written for me is archived in the JSONL logs of my sessions — not as a deliberate backup, but as a byproduct of conversation storage.
Claude's reaction, paraphrased: in the project directory there are 247 JSONL files. 92 contain Write calls, 54 contain Edits. I'll start with the extraction.
This could have been a 30-minute recovery if JSONL had remained the only source. It wasn't. Only when comparing the JSONL yield with what the .git/index knew as "total" did it become clear how much JSONL doesn't cover — binary files never appear in it, older articles from the Roo Code era predate it, and assets produced by external tools bypass it entirely. Over the following hours, five more sources came into play, and in the days afterwards some follow-up work. The full recovery session ran from Saturday, April 4, through April 16 — twelve days of bouncing between editor, terminal, and Claude. Here's where the technical part starts.
The JSONL Time Machine: How It Exists
Claude Code stores every session as a JSONL file under ~/.claude/projects/<path-slug>/<session-id>.jsonl. Each line is a JSON event: user message, assistant response, tool call, tool result. For recovery purposes, three event types are relevant: Write (full file content as a content parameter), Edit (old and new fragment as old_string/new_string), and Bash commands with heredoc syntax (content in the command string). Read outputs are not persisted as replayable state — they appear in the tool result, but Claude doesn't replay them on session resume, so you don't need them for recovery. Binary files never reach JSONL in the first place.
Retention is controlled via the cleanupPeriodDays setting in ~/.claude/settings.json. Default: 30 days. If you don't adjust it, you lose everything after a month.
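A minimal sketch of checking that setting programmatically — assuming `settings.json` is plain JSON and using the key name and 30-day default described above; worth verifying against the current Claude Code documentation:

```python
import json
from pathlib import Path


def read_settings(path: Path) -> dict:
    """Read a Claude Code settings.json; treat a missing file as empty settings."""
    try:
        return json.loads(path.read_text(encoding="utf-8"))
    except FileNotFoundError:
        return {}


def effective_retention_days(settings: dict, default: int = 30) -> int:
    """Return the JSONL retention in days, falling back to the 30-day default."""
    return int(settings.get("cleanupPeriodDays", default))
```

Calling `effective_retention_days(read_settings(Path.home() / ".claude" / "settings.json"))` tells you how long your accidental time machine actually reaches back.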
That the JSONL logs from April 4 still existed at all wasn't coincidence — but it wasn't a backup-planning move either. By default, Claude Code cleans up after 30 days. I had raised cleanupPeriodDays to a multiple of that months earlier, for a completely different reason: I came from Roo Code, where sessions stayed persistent. When I looked for an old session in Claude Code and it simply didn't exist anymore, the frustration was large enough to bump the value up drastically on the spot. A past self cursed about a tool switch — and in doing so rescued a future self's repo.
That's the kind of causal chain that can be told in retrospect as though it had been planned. It wasn't. Without the Roo Code frustration I wouldn't have touched the default value, and then the JSONLs of the crucial December-through-March phase would already have been history by the time of the April 4 incident. That fundamentally changes the equation: not "Claude Code has a backup that saves you," but "Claude Code has a byproduct that saves you if you happen to have cranked up a retention setting two months earlier for a completely different reason." This isn't a recommendation for Anthropic to market the feature. It's a warning not to rely on it.
Source 1: JSONL Conversation Logs
The most important source. The method was conceptually simple: scan all JSONL files in the project directory, extract Write and Edit tool calls, group by target path, keep the chronologically latest version per path. The ISO-8601 timestamps in the events allow lexicographic sorting, which corresponds to chronological sorting.
In addition to the main session, I had over the past few months regularly set up Git worktrees for parallel article work. Each worktree gets its own project slug in Claude Code (the path slug contains the worktree path), and therefore its own JSONL files. A total of 81 files came from these parallel worktree sessions — files I would have missed on the first pass if the script had only searched under the main project slug.
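Rather than hardcoding the main project slug, a sketch like the following enumerates every slug directory under `~/.claude/projects/` and filters by a repo-name substring. The exact slug format is an implementation detail of Claude Code, so treat the substring match as a heuristic, not a contract:

```python
from pathlib import Path


def matching_session_dirs(projects_root: Path, needle: str) -> list[Path]:
    """All Claude Code project-slug directories whose name contains `needle`.

    Worktrees get their own slug, so filtering by a repo-name substring
    (e.g. "private") catches the worktree sessions too.
    """
    if not projects_root.is_dir():
        return []
    return sorted(d for d in projects_root.iterdir()
                  if d.is_dir() and needle in d.name)
```

Feeding every directory this returns into the extractor below is what finally surfaced the 81 worktree files.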
The core pattern of the extractor:
```python
import json
from collections import defaultdict
from pathlib import Path


def extract_writes(jsonl_path: Path) -> dict[str, str]:
    """Returns {file_path: latest_content} from a Claude Code JSONL session."""
    by_path: dict[str, list[tuple[str, str]]] = defaultdict(list)
    with open(jsonl_path, "r", encoding="utf-8") as f:
        for line in f:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue
            if event.get("type") != "assistant":
                continue
            ts = event.get("timestamp", "")
            for block in event.get("message", {}).get("content", []) or []:
                if not isinstance(block, dict) or block.get("type") != "tool_use":
                    continue
                tool = block.get("name")
                inp = block.get("input", {}) or {}
                if tool == "Write" and inp.get("file_path") and inp.get("content") is not None:
                    by_path[inp["file_path"]].append((ts, inp["content"]))
                elif tool == "Edit" and inp.get("file_path") and inp.get("new_string") is not None:
                    by_path[inp["file_path"]].append((ts, inp["new_string"]))
    return {path: sorted(entries)[-1][1] for path, entries in by_path.items()}
```
Edge cases I underestimated on the first pass: long file contents that Claude pipes into a Bash command via heredoc — those don't appear in a Write block but in the Bash block, as part of the command string. Edit calls only supply a fragment, not a whole file; if all you have is edits, you have to chain diffs across multiple events, otherwise the surrounding context is missing. And without a deterministic tiebreaker between simultaneous events (rare, but possible), you get back the wrong version.
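For the heredoc case, a rough extraction sketch — deliberately simplified, assuming plain `<<EOF` / `<<'EOF'` markers and ignoring `<<-` tab-stripping and nested heredocs:

```python
import re

# Matches <<EOF / <<'EOF' / <<"EOF" ... body ... EOF (simplified)
HEREDOC_RE = re.compile(
    r"<<\s*['\"]?(?P<tag>\w+)['\"]?\n(?P<body>.*?)\n(?P=tag)\s*$",
    re.DOTALL | re.MULTILINE,
)


def extract_heredocs(command: str) -> list[str]:
    """Return the body of every heredoc in a Bash tool-call command string."""
    return [m.group("body") for m in HEREDOC_RE.finditer(command)]
```

Running this over the `command` input of every Bash tool_use block recovers the file contents that never went through Write at all.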
The yield: 206 files from the main session, 81 from the worktree sessions, for a total of 287 files reconstructed — mostly blog articles, metadata files, and scripts.
★ Insight ──────────────────────────────────────────────────────────────────
The JSONL logs are an unintentional audit trail. Every file Claude Code has ever written for you sits there in plain text. That has implications beyond recovery — for security, for compliance, for the question of what visible history an AI coding agent actually leaves behind.
────────────────────────────────────────────────────────────────────────────
Source 2: The Git Reflog Trick
In parallel with the JSONL extraction, I took a closer look at the surviving remainder of .git/. The pack file was gone, but .git/logs/HEAD had weathered the destruction — as a plain text file the reflog is still readable even when object databases have gaps. 260 entries were in it, spanning 2025-12-12 through 2026-04-03 at 12:37 (the last commit before the rm).
Reflog provides commit hashes, timestamps, and messages — not the actual blobs. But for reconstructing status.md, commit ordering, and cross-project history, that was invaluable: 203 commits could be fully reconstructed, including the parent chain, as a cleanly sortable list.
The core command for it:
```shell
git reflog --format='%H %at %gs' \
  | while read hash time msg; do
      printf '%s %s %s\n' "$(date -r "$time" '+%Y-%m-%d %H:%M:%S')" "$hash" "$msg"
    done
```
The limitation is as obvious as it is important: the reflog references commit objects. If the objects lived in the pack file and the pack file is gone, reflog gives you the message but no diff. The blob contents had to come from the other sources. Still: the reflog reliably answered the question "what did I even have?" — and for recovery that's often half the battle.
★ Insight ──────────────────────────────────────────────────────────────────
Git stores commit information in three places: `objects/`, `refs/`, and `logs/`. Treating `objects/` alone as "the truth" means underestimating the reflog. It's a separate log, not an index on `objects/` — and it survives catastrophes that hit the object store.
────────────────────────────────────────────────────────────────────────────
Source 3: file-history
In parallel with the JSONL logs, Claude Code places versioned snapshots of files it touches under ~/.claude/file-history/. The filenames are hash-based with @v1, @v2 suffixes — each version separate, independent of the JSONL conversation view.
The usefulness is different from JSONL's: JSONL is the conversation view — every Write/Edit as an event with a timestamp. file-history is the file view — the versions per file. The overlap isn't 1:1, because file-history also contains states that originated from Read contexts or were laid down as an intermediate step in multi-stage edits. 9,377 snapshots in total on my system, filtered to blog-relevant paths: 141 files.
The larger part of that overlapped with the JSONL yield and confirmed its content — that was the actual value. In reconstruction, "two sources say the same thing" is the simplest form of trust you can get. A handful of states came out of file-history that I hadn't found in JSONL: intermediate states that had only received a snapshot as a Read context, without ever having been written in a session. I couldn't have closed that gap between "Claude writes it" and "Claude touches it" from JSONL alone.
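A sketch of picking the newest snapshot per file, assuming the `@vN` filename suffix described above — I haven't seen this layout documented, so treat the parsing as a heuristic:

```python
import re

# Snapshot names like "abc123@v1", "abc123@v2" (assumed layout, see above)
VERSION_RE = re.compile(r"^(?P<base>.+)@v(?P<n>\d+)$")


def latest_versions(names: list[str]) -> dict[str, str]:
    """Keep the highest-numbered snapshot per base name."""
    best: dict[str, tuple[int, str]] = {}
    for name in names:
        m = VERSION_RE.match(name)
        if not m:
            continue
        n = int(m.group("n"))
        base = m.group("base")
        if base not in best or n > best[base][0]:
            best[base] = (n, name)
    return {base: name for base, (_, name) in best.items()}
```

Applied to a directory listing of `~/.claude/file-history/`, this reduces the 9,377 raw snapshots to one candidate per file before any content comparison starts.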
Source 4: Publii DB as an Unexpected Backup Layer
Publii, my local blog publishing software, stores all posts in a SQLite database: ~/Documents/Publii/sites/sven-poeche/input/db.sqlite. Media sit separately under input/media/. Neither lived under ~/projects/private/ — so the rm -Rf hadn't even grazed that layer.
The extraction was a SQL one-liner:
```sql
SELECT slug, title, text, status,
       datetime(created_at / 1000, 'unixepoch') AS created,
       datetime(modified_at / 1000, 'unixepoch') AS modified
FROM posts
WHERE status IN ('published', 'draft', 'published,is-page')
ORDER BY modified DESC;
```
15 articles in two languages yielded 30 entries in the posts table, plus three static pages (About, Impressum, Kontakt). Plus seven drafts, some of which were no longer current. Recovery-relevant in the narrow sense: the 30 article entries as a direct Markdown source.
The assets were the second piece of good news: 69 out of 71 feature images and Excalidraw PNGs from the media folder were complete. The limitation of this source is obvious — Publii only stores what was published. Drafts I hadn't maintained through Publii aren't in there. Neither are metadata files. For the production side, however: almost complete.
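The query above can be wrapped into a small dump script. One caveat I'm flagging as an assumption: what the `text` column holds depends on the Publii editor in use, so the sketch writes the body out unchanged rather than pretending it's clean Markdown:

```python
import sqlite3
from pathlib import Path


def dump_posts(db_path: str, out_dir: Path) -> int:
    """Write each recoverable post to <out_dir>/<slug>.md, body unchanged.

    Column and status names follow the query above; returns the post count.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT slug, text FROM posts "
            "WHERE status IN ('published', 'draft', 'published,is-page')"
        ).fetchall()
    finally:
        con.close()
    for slug, text in rows:
        (out_dir / f"{slug}.md").write_text(text or "", encoding="utf-8")
    return len(rows)
```

One run against a copy of `db.sqlite` (never the live file) turns the database layer into plain files you can diff against the JSONL yield.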
Source 5: PNG → Excalidraw via Claude Vision
32 Excalidraw diagrams were missing. They weren't in JSONL (the Excalidraw JSONs were produced with an external tool, not through Claude Code's Write), not in file-history, not in Publii. Only the PNG renderings existed as blog assets — that is, what Publii had exported from the Excalidraw sources.
The approach was honestly an experiment I hadn't thought would be this viable: point Claude Vision at the PNGs and have it reconstruct the JSON source. Three parallel subagents, each getting a batch of PNGs and a prompt along these lines:
"This PNG shows an Excalidraw diagram. Identify all elements (rectangles, arrows, text, colors, positions). Produce a valid Excalidraw JSON file that reproduces the diagram equivalently."
The yield was better than expected. 11 English diagrams could be reconstructed directly from PNGs. I didn't reprocess the 21 German variants via Vision — the German and English diagrams have the same structure and the same positions, only the labels differ. So: take the English JSON, translate the text labels, write the German JSON back. Claude does that as a text task in seconds. Together: 32 reconstructed diagrams. Combined with the JSONL and file-history finds, I got to 55 out of 56.
Quality note so no illusions take hold: the reconstructed diagrams are structurally equivalent, not bit-exact. Element positions deviate by a few pixels, arrow curvatures aren't identical, font metrics differ slightly. For a blog diagram whose purpose is to convey information: entirely sufficient.
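The EN-to-DE label swap is a pure text transformation. A minimal sketch, assuming the standard Excalidraw scene format (a top-level `elements` list, text elements carrying `text` and `originalText` fields); only labels change, geometry stays untouched:

```python
import json


def translate_labels(excalidraw_json: str, mapping: dict[str, str]) -> str:
    """Return a copy of an Excalidraw scene with text labels swapped via `mapping`."""
    scene = json.loads(excalidraw_json)
    for el in scene.get("elements", []):
        if el.get("type") != "text":
            continue
        new = mapping.get(el.get("text", ""))
        if new is not None:
            el["text"] = new
            # keep the wrapped/unwrapped variants in sync if present
            if "originalText" in el:
                el["originalText"] = new
    return json.dumps(scene, ensure_ascii=False)
```

With a label dictionary per diagram, this is exactly the "take the English JSON, translate the labels, write the German JSON back" step — seconds per file.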
★ Insight ──────────────────────────────────────────────────────────────────
Claude Vision isn't an image parser in the classical sense — it's a structure recognizer. That changes what "data loss" even means when you still have the exported rendering. Losing the source and still having the image was, until recently, a one-way problem. It isn't anymore.
────────────────────────────────────────────────────────────────────────────
Source 6: curl_cffi + Chrome Cookies Against Cloudflare
Four Medium articles existed only as published versions. The Markdown sources were gone, the metadata files too. Medium didn't offer a per-article Markdown export when I went looking for one — and even if it had: Medium is Cloudflare-protected, and anyone who wants automated content access regularly hits Turnstile challenges.
The solution — and here the irony dial goes up — is a combination of curl_cffi (impersonates the TLS fingerprint of a real Chrome 120) and browser_cookie3 (reads my Medium session cookies from Chrome Profile 1). The result is an HTTP request Cloudflare can't tell apart from a real browser — because, from a TLS perspective, that's exactly what it is.
```python
import os

import browser_cookie3
from curl_cffi import requests

# Placeholder — one of my own Medium article URLs goes here
url = "https://medium.com/@me/example-article"

cookie_file = os.path.expanduser(
    "~/Library/Application Support/Google/Chrome/Profile 1/Cookies"
)
cj = browser_cookie3.chrome(domain_name="medium.com", cookie_file=cookie_file)

session = requests.Session()
for c in cj:
    session.cookies.set(c.name, c.value, domain=c.domain)

r = session.get(url, impersonate="chrome120")
r.raise_for_status()
print(r.text)
```
To be clear: this isn't "bypassing Medium" — I'm scraping my own articles. I have the right to this content, I just don't have access to the source files anymore. Four articles came back as Markdown. An aside fitting for this article: I scrape my own Medium articles in order to be able to write this article about the recovery — which I'll then publish on Medium again.
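What the request returns is rendered HTML, not Markdown; turning it back into Markdown was a separate step. As a self-contained illustration of the shape of that step, a deliberately tiny stdlib converter — a real article needs a full HTML-to-Markdown tool, this handles only a few tags:

```python
from html.parser import HTMLParser

# Opening markers per tag; closing markers are handled in handle_endtag
OPEN = {"h1": "# ", "h2": "## ", "h3": "### ", "strong": "**", "em": "*", "code": "`"}


class MiniMarkdown(HTMLParser):
    """Minimal HTML -> Markdown: headings, paragraphs, strong/em, inline code."""

    def __init__(self):
        super().__init__()
        self.out: list[str] = []

    def handle_starttag(self, tag, attrs):
        self.out.append(OPEN.get(tag, ""))

    def handle_endtag(self, tag):
        if tag == "strong":
            self.out.append("**")
        elif tag == "em":
            self.out.append("*")
        elif tag == "code":
            self.out.append("`")
        elif tag in ("p", "h1", "h2", "h3"):
            self.out.append("\n\n")

    def handle_data(self, data):
        self.out.append(data)


def html_to_md(html: str) -> str:
    parser = MiniMarkdown()
    parser.feed(html)
    return "".join(parser.out).strip()
```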
What Didn't Come Back
Honest inventory instead of a heroic final total: 78 files remained missing. Roughly categorized:
| Category | Reason |
|---|---|
| Feature image intermediate versions (JPEG, PNG) | Binary files don't show up in JSONL |
| Old draft versions without a Write event | Only in the pack file, lost there |
| Reference material (screenshots, PDFs) | Binary, not in JSONL |
| One Excalidraw set (sdd1-speckit-workflow) | No PNG available, so nothing for Vision either |
| Temporary notes and .tmp files | Never in a Write/Edit tool call |
In the days after that Saturday, about 52 more files turned up — worktree JSONLs I'd missed on the first pass, additional Publii media assets, scattered Excalidraw reconstructions. The final state once the recovery work was done: 412 of 490 files, around 84 percent, verifiable via `git ls-files | wc -l` in the initial commit f3d9c09 ("Initial commit: Recovered repository"). Of the missing 78, most were either reconstructable via the publishing platforms or reference material, not source data.
The Actual Lesson
Claude Code's JSONL logs weren't a backup. They were a byproduct that worked like a backup. That's a subtle but important distinction. A backup is a deliberate decision. A byproduct is luck — and luck isn't a strategy.
The 84-percent recovery wasn't proof that Claude Code replaces backups. It was proof that I had been very, very lucky for a long time. That the JSONL logs were still there at all traced back to frustration over a tool switch, not to any backup planning. That the Publii DB lived outside ~/projects/private/ was a historical accident. That the PNGs lived in Publii media rather than in the repo, likewise. Every single one of these sources could have been missing. Then it wouldn't have been 84 percent, but 40 or 20.
Every developer knows the rule. Few follow it. I'm now one of the few — not because I wanted to, but because there was no alternative anymore. The insight that crept in over twelve recovery days is unspectacular: backup isn't a question of intelligence. I haven't "now understood that backup matters" — I knew that before. I simply have no way left to tell myself it doesn't concern me. That's a difference in honesty with yourself, not in knowledge.
What's Different Now
Time Machine has been running since the incident. Hourly snapshots over SMB to a dedicated share, not the vague "time machine" metaphor the title lets me get away with. cleanupPeriodDays in ~/.claude/settings.json stays elevated — for a clearly documented reason now, no longer just out of Roo Code frustration. The blogs repo now has a remote on a self-hosted Gitea. After every commit a push hook, no more local islands.
That same evening of April 4, while the recovery session was still running and had reached a stable state, I finally ran the three rm -Rf commands I had actually meant to run at 01:38 — this time deliberately: calculator at 20:24, tui-calculator at 23:16:24, calc-tui at 23:16:40. The Zsh history shows all three. Twenty-one hours of detour for three cleanup commands.
Three Things to Take Away
No ten-point list, no checklist. Three things:
1. If you use Claude Code productively: raise `cleanupPeriodDays` in `~/.claude/settings.json`. I did it back then out of frustration, not out of foresight. In hindsight, one of the best decisions I've made without knowing I was making it. JSONL logs are an unintentional audit trail — you never know when you'll want to look back, and the default of 30 days is too short for that.
2. If you rely on `.git` as a backup: it isn't a backup. Git protects versions, not hardware. An `rm -rf` wipes both. The pack file sits in the same directory as the working tree, and that's exactly where a disaster `rm` aims.
3. If you preach about backups: check your own setup. Today. Not tomorrow. This article only became possible because I got lucky with my own earlier decisions — not because I was smart.
The time machine exists. But it's not what you should build on.

