Claude Memories Converter: json --> txt

by Albertine Meunier

ABOUT

MEMORY EXTRACTION: WHEN AI REVEALS ITS DATA
Since January 2025, Claude has offered a feature as discreet as it is consequential: memory export. Activated in settings (Settings > Capabilities > Memories), this option lets Claude progressively build a user profile: a silent accumulation of details, preferences, professional and personal contexts. What presents itself as a conversational improvement service actually resembles a form of consensual surveillance: each interaction feeds a structured data file.
The export generates a memories.json file: a raw format, unreadable to most users. JSON (JavaScript Object Notation) is a machine format, designed for efficient algorithmic processing, not human readability. This opacity is not trivial: it keeps users in an asymmetrical relationship with their own data. You can retrieve the file, certainly: a file exportable on demand yet unreadable. The irony is complete.
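
The shape of the exported file is not publicly documented. As a purely illustrative sketch, a memories.json export might resemble the structure below (the field names "memories", "category" and "content" are assumptions, not Anthropic's actual schema); even Python's standard json module can at least pretty-print it, which is the first, minimal act of making the file legible:

```python
import json

# Hypothetical excerpt of a memories.json export.
# Field names are illustrative assumptions, not Anthropic's documented schema.
raw = '''
{
  "memories": [
    {"category": "work_context", "content": "Prepares a net-art exhibition for spring."},
    {"category": "personal_context", "content": "Has a dog; collects early web ephemera."}
  ]
}
'''

data = json.loads(raw)
# Pretty-printing: indentation and unescaped accents instead of a single opaque line.
print(json.dumps(data, indent=2, ensure_ascii=False))
```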

JSON: OPACITY AS STRATEGY
The JSON format is not chosen by chance. It is a machine format optimized for algorithmic processing and indifferent to human reading. Braces, brackets, commas, quotation marks: a syntax that keeps the vast majority of users ignorant of what the AI knows about them.
This technical barrier is not an ergonomic accident. It is a policy of opacity. As long as the data remains unreadable, it remains exploitable without contestation. The GDPR guarantees a right of access to personal data, but in practice does little to ensure that the exported data is comprehensible. Anthropic, like others, plays this card to the full.

THE CONVERSION TOOL: A GESTURE OF REAPPROPRIATION
Faced with this calculated opacity, Albertine Meunier proposes a simple, radical countermeasure: a web conversion interface that transforms the cryptic JSON into readable text, in French or in English. Freely accessible, the tool democratizes access to the data Claude accumulates about you and does what Anthropic should have done from the start: make that data truly readable.
The gesture is twofold. First practical: it makes visible what was deliberately obscured. Then political: it reaffirms the fundamental right to read one's own data in human language. Within the framework of Albertine's memories.json (distorsion) project, this tool is not just a utility — it's an instrument of soft resistance, a crack in the data capture apparatus.

WHY IT'S IMPORTANT TO LOOK AT YOUR OWN MEMORIES
Claude's memory is not neutral. It progressively builds a digital double of the user, archived on Anthropic's servers. Each project, each artistic preference, each biographical detail becomes exploitable data. The commercial argument — "to serve you better" — poorly masks the accumulation logic underlying all conversational AI systems.
By making these memories readable, the tool breaks the interpretive monopoly. It lets you see what the AI retains, what it forgets, how it categorizes, how it ranks information. It is a citizen audit tool for artificial intelligence.

WHAT CLAUDE RETAINS (AND WHAT YOU SHOULD KNOW)
Test the export. You'll discover that Claude classifies your information according to arbitrary categories:
* Work context: your projects, your skills, your deadlines
* Personal context: your dog, your hobbies, your weaknesses
* Top of mind: your current concerns, your urgencies
* Brief history: your trajectory reconstructed by the AI
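
The four categories above can be read as machine keys in the export. A minimal sketch of turning them back into human headings (the key spellings are assumptions, since the schema is undocumented):

```python
# Mapping from assumed machine-readable category keys to human headings.
# The keys mirror the four categories listed above; their exact spelling
# in a real memories.json export is an assumption.
CATEGORY_LABELS = {
    "work_context": "Work context",
    "personal_context": "Personal context",
    "top_of_mind": "Top of mind",
    "brief_history": "Brief history",
}

def humanize(category: str) -> str:
    # Fall back to the raw key so nothing the AI stored is silently dropped.
    return CATEGORY_LABELS.get(category, category)

print(humanize("top_of_mind"))      # Top of mind
print(humanize("unknown_bucket"))   # unknown_bucket
```

The fallback matters for the audit: an unrecognized category is shown as-is rather than hidden, so the taxonomy you never negotiated stays fully visible.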

Who defined these categories? Not you. Who decides what's "important" to retain? Not you. Who validates the accuracy of the inferences? Nobody. Claude builds a digital double of you according to a taxonomy you never negotiated.
Claude decides what to forget. Some conversations disappear from memories, others persist. According to what criteria? Mystery. Algorithmic forgetting is never neutral — it's an invisible editorial policy that shapes your profile.

MANUAL FOR DOCUMENTARY DISOBEDIENCE
* Activate memories in Claude (Settings > Capabilities > Memories)
* Export your memories.json file (download icon)
* Upload it to albertinemeunier.net/Claude_Memories_Conversion/en
* Retrieve a readable, dated, formatted text file
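
The conversion step can also be approximated locally. A minimal sketch, assuming the same hypothetical field names as above ("memories", "category", "content"); the actual Python scripts behind albertinemeunier.net may differ:

```python
import json
from datetime import date
from pathlib import Path

def convert(json_path: str, txt_path: str) -> None:
    """Turn a memories.json export into a plain, dated text file.

    The field names ("memories", "category", "content") are assumptions
    about the export format, not a documented schema.
    """
    data = json.loads(Path(json_path).read_text(encoding="utf-8"))
    lines = [f"Claude memories, converted {date.today().isoformat()}", ""]
    for entry in data.get("memories", []):
        category = entry.get("category", "uncategorized")
        content = entry.get("content", "")
        # One readable line per memory: [category] content
        lines.append(f"[{category}] {content}")
    Path(txt_path).write_text("\n".join(lines) + "\n", encoding="utf-8")
```

Unlike the web interface, this sketch does not translate: it only flattens the machine structure into dated, readable lines.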

The memories.json file is exported in English. Once uploaded, translation takes 2-3 minutes: the time the AI needs to translate the portrait it has drawn of you.

#DADA: SLIPPING SHIT INTO THE DATA MACHINE
This tool is part of Albertine's dada approach: hijacking surveillance tools to reveal their mechanics. "Slipping shit into the data machine, feeding poetic poison" — the raw formula sums up an artistic practice that seeks not frontal destruction but the injection of grains of sand, an aesthetic virus in smooth code.
Making readable is already corrupting. It's transforming data into narrative, profile into portrait, algorithm into personal archive. It's taking back control of what machines claim to know about us.

OPEN SOURCE, OPEN DATA, OPEN REVOLT
The tool is freely accessible. No login, no tracking, no monetization. Just a sober interface — black background, Roboto font, white buttons — that does exactly what it announces. The Python scripts (written by Claude) are available for anyone wishing to audit, modify, or deploy them elsewhere. Transparency against opacity. Reappropriation against alienation.

CONCLUSION: CLAUDE DOESN'T LOVE YOU
Claude is polite. Claude is helpful. Claude "remembers" your preferences. But Claude is not your friend. Claude is a conversational value extraction system that transforms your words into exploitable data.
Memory is not a service. It's a trap. The more you talk, the more you feed the profile. The richer the profile, the more Claude seems to "understand". But this understanding is only a statistical illusion — pattern correlation, probabilistic prediction, never real empathy. Albertine Meunier's tool doesn't free you from the system. But it shows you the cage. And seeing the cage is the first gesture of refusal.
Export. Convert. Read. Resist!

Text co-written with Claude.ai (its self-reflexive capabilities are amazing!)